00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1908 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3169 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.128 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.179 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.222 Using shallow fetch with depth 1 00:00:00.223 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.223 > git --version # timeout=10 00:00:00.264 > git --version # 'git version 2.39.2' 00:00:00.264 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.291 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.291 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.136 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.148 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.160 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD) 00:00:06.160 > git config core.sparsecheckout # timeout=10 00:00:06.172 > git read-tree -mu HEAD # timeout=10 00:00:06.188 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5 00:00:06.207 Commit message: "pool: fixes for VisualBuild class" 00:00:06.207 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10 00:00:06.302 [Pipeline] Start of Pipeline 00:00:06.316 [Pipeline] library 00:00:06.317 Loading library shm_lib@master 00:00:06.318 Library shm_lib@master is cached. Copying from home. 00:00:06.335 [Pipeline] node 00:00:06.347 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.349 [Pipeline] { 00:00:06.359 [Pipeline] catchError 00:00:06.361 [Pipeline] { 00:00:06.373 [Pipeline] wrap 00:00:06.383 [Pipeline] { 00:00:06.391 [Pipeline] stage 00:00:06.393 [Pipeline] { (Prologue) 00:00:06.590 [Pipeline] sh 00:00:06.878 + logger -p user.info -t JENKINS-CI 00:00:06.893 [Pipeline] echo 00:00:06.895 Node: CYP9 00:00:06.901 [Pipeline] sh 00:00:07.202 [Pipeline] setCustomBuildProperty 00:00:07.213 [Pipeline] echo 00:00:07.215 Cleanup processes 00:00:07.220 [Pipeline] sh 00:00:07.506 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.506 3777366 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.520 [Pipeline] sh 00:00:07.816 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.816 ++ grep -v 'sudo pgrep' 00:00:07.816 ++ awk '{print $1}' 00:00:07.816 + sudo kill -9 00:00:07.816 + true 00:00:07.886 [Pipeline] cleanWs 00:00:07.899 [WS-CLEANUP] Deleting project workspace... 00:00:07.899 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.907 [WS-CLEANUP] done 00:00:07.910 [Pipeline] setCustomBuildProperty 00:00:07.921 [Pipeline] sh 00:00:08.205 + sudo git config --global --replace-all safe.directory '*' 00:00:08.277 [Pipeline] nodesByLabel 00:00:08.278 Found a total of 2 nodes with the 'sorcerer' label 00:00:08.286 [Pipeline] httpRequest 00:00:08.290 HttpMethod: GET 00:00:08.291 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.294 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:08.297 Response Code: HTTP/1.1 200 OK 00:00:08.298 Success: Status code 200 is in the accepted range: 200,404 00:00:08.298 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.406 [Pipeline] sh 00:00:09.698 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz 00:00:09.717 [Pipeline] httpRequest 00:00:09.724 HttpMethod: GET 00:00:09.724 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:09.725 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:00:09.746 Response Code: HTTP/1.1 200 OK 00:00:09.746 Success: Status code 200 is in the accepted range: 200,404 00:00:09.747 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:16.773 [Pipeline] sh 00:01:17.061 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz 00:01:19.624 [Pipeline] sh 00:01:19.912 + git -C spdk log --oneline -n5 00:01:19.912 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout 00:01:19.912 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller 00:01:19.912 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:19.912 3651466d0 test/scheduler: Calculate median of the cpu load samples 00:01:19.912 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc() 00:01:19.925 [Pipeline] } 00:01:19.942 [Pipeline] // stage 00:01:19.952 [Pipeline] stage 00:01:19.954 [Pipeline] { (Prepare) 00:01:19.972 [Pipeline] writeFile 00:01:19.988 [Pipeline] sh 00:01:20.274 + logger -p user.info -t JENKINS-CI 00:01:20.288 [Pipeline] sh 00:01:20.576 + logger -p user.info -t JENKINS-CI 00:01:20.591 [Pipeline] sh 00:01:20.877 + cat autorun-spdk.conf 00:01:20.877 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.877 SPDK_TEST_NVMF=1 00:01:20.877 SPDK_TEST_NVME_CLI=1 00:01:20.877 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.877 SPDK_TEST_NVMF_NICS=e810 00:01:20.877 SPDK_RUN_UBSAN=1 00:01:20.877 NET_TYPE=phy 00:01:20.885 RUN_NIGHTLY=1 00:01:20.889 [Pipeline] readFile 00:01:20.912 [Pipeline] withEnv 00:01:20.914 [Pipeline] { 00:01:20.928 [Pipeline] sh 00:01:21.216 + set -ex 00:01:21.216 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:21.216 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:21.216 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.216 ++ SPDK_TEST_NVMF=1 00:01:21.216 ++ SPDK_TEST_NVME_CLI=1 00:01:21.216 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:21.216 ++ SPDK_TEST_NVMF_NICS=e810 00:01:21.216 ++ SPDK_RUN_UBSAN=1 00:01:21.216 ++ NET_TYPE=phy 00:01:21.216 ++ RUN_NIGHTLY=1 00:01:21.216 + case $SPDK_TEST_NVMF_NICS in 00:01:21.216 + DRIVERS=ice 00:01:21.216 + [[ tcp == \r\d\m\a ]] 00:01:21.216 + [[ -n ice ]] 00:01:21.216 + sudo rmmod mlx4_ib mlx5_ib irdma 
i40iw iw_cxgb4 00:01:21.216 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:21.216 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:21.216 rmmod: ERROR: Module irdma is not currently loaded 00:01:21.216 rmmod: ERROR: Module i40iw is not currently loaded 00:01:21.216 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:21.216 + true 00:01:21.216 + for D in $DRIVERS 00:01:21.216 + sudo modprobe ice 00:01:21.216 + exit 0 00:01:21.226 [Pipeline] } 00:01:21.243 [Pipeline] // withEnv 00:01:21.247 [Pipeline] } 00:01:21.261 [Pipeline] // stage 00:01:21.268 [Pipeline] catchError 00:01:21.269 [Pipeline] { 00:01:21.282 [Pipeline] timeout 00:01:21.282 Timeout set to expire in 50 min 00:01:21.284 [Pipeline] { 00:01:21.298 [Pipeline] stage 00:01:21.300 [Pipeline] { (Tests) 00:01:21.314 [Pipeline] sh 00:01:21.635 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.635 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.635 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.635 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:21.635 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:21.635 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.635 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:21.635 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.635 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:21.635 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:21.635 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:21.635 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:21.635 + source /etc/os-release 00:01:21.635 ++ NAME='Fedora Linux' 00:01:21.635 ++ VERSION='38 (Cloud Edition)' 00:01:21.635 ++ ID=fedora 00:01:21.635 ++ VERSION_ID=38 00:01:21.635 ++ VERSION_CODENAME= 00:01:21.635 ++ PLATFORM_ID=platform:f38 00:01:21.635 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.635 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.635 ++ LOGO=fedora-logo-icon 00:01:21.635 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.635 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.635 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.635 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.635 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.635 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.635 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.635 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.635 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.635 ++ SUPPORT_END=2024-05-14 00:01:21.635 ++ VARIANT='Cloud Edition' 00:01:21.635 ++ VARIANT_ID=cloud 00:01:21.635 + uname -a 00:01:21.635 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.635 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:24.182 Hugepages 00:01:24.182 node hugesize free / total 00:01:24.182 node0 1048576kB 0 / 0 00:01:24.182 node0 2048kB 0 / 0 00:01:24.182 node1 1048576kB 0 / 0 00:01:24.182 node1 2048kB 0 / 0 00:01:24.182 00:01:24.182 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.182 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma 
- - 00:01:24.182 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:24.182 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:24.182 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:24.182 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:24.182 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:24.182 + rm -f /tmp/spdk-ld-path 00:01:24.182 + source autorun-spdk.conf 00:01:24.182 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.182 ++ SPDK_TEST_NVMF=1 00:01:24.182 ++ SPDK_TEST_NVME_CLI=1 00:01:24.182 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.182 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.182 ++ SPDK_RUN_UBSAN=1 00:01:24.182 ++ NET_TYPE=phy 00:01:24.182 ++ RUN_NIGHTLY=1 00:01:24.182 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.182 + [[ -n '' ]] 00:01:24.182 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.182 + for M in /var/spdk/build-*-manifest.txt 00:01:24.182 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.182 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.182 + for M in /var/spdk/build-*-manifest.txt 00:01:24.182 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.182 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:24.182 ++ uname 00:01:24.182 + [[ Linux == \L\i\n\u\x ]] 00:01:24.182 + sudo dmesg -T 00:01:24.182 + sudo dmesg --clear 00:01:24.182 + dmesg_pid=3778327 00:01:24.182 + [[ Fedora Linux == FreeBSD ]] 00:01:24.182 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.182 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.182 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.182 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:24.182 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:24.182 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.182 + sudo dmesg -Tw 00:01:24.182 + export FIO_BIN=/usr/src/fio-static/fio 00:01:24.182 + FIO_BIN=/usr/src/fio-static/fio 00:01:24.182 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.182 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:24.182 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.182 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.182 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:24.182 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.182 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.182 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:24.182 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.182 Test configuration: 00:01:24.182 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.182 SPDK_TEST_NVMF=1 00:01:24.182 SPDK_TEST_NVME_CLI=1 00:01:24.182 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.182 SPDK_TEST_NVMF_NICS=e810 00:01:24.182 SPDK_RUN_UBSAN=1 00:01:24.182 NET_TYPE=phy 00:01:24.182 RUN_NIGHTLY=1 22:43:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:24.182 22:43:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.182 22:43:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.182 22:43:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.182 22:43:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.182 22:43:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.182 22:43:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.182 22:43:52 -- paths/export.sh@5 -- $ export PATH 00:01:24.183 22:43:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:24.183 22:43:52 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:24.183 22:43:52 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:24.183 22:43:52 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717965832.XXXXXX 00:01:24.183 22:43:52 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717965832.3BFi4e 00:01:24.183 22:43:52 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:24.183 22:43:52 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
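The autorun-spdk.conf consumed by spdk/autorun.sh above is a plain KEY=value file that autorun.sh sources before driving autobuild and autotest. A minimal sketch of reproducing this run's configuration outside Jenkins, assuming SPDK is already checked out under the same workspace path shown in the log (the path is only illustrative for a local clone):

#!/usr/bin/env bash
set -e
# Workspace root used by this job; adjust for a local reproduction.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

# Write the same test configuration this run used.
cat > "$WORKSPACE/autorun-spdk.conf" <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=tcp
SPDK_TEST_NVMF_NICS=e810
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=1
EOF

# autorun.sh sources the conf and then runs the build and test stages,
# which is what produces the autobuild output that follows in this log.
"$WORKSPACE/spdk/autorun.sh" "$WORKSPACE/autorun-spdk.conf"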
00:01:24.183 22:43:52 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:24.183 22:43:52 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:24.183 22:43:52 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.183 22:43:52 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:24.183 22:43:52 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:24.183 22:43:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.183 22:43:52 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:24.183 22:43:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.183 22:43:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.183 22:43:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.183 22:43:52 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.183 Sun Jun 9 08:43:52 PM UTC 2024 00:01:24.183 22:43:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.183 LTS-43-g130b9406a 00:01:24.183 22:43:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.183 22:43:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.183 22:43:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.183 22:43:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:24.183 22:43:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:24.183 22:43:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.183 ************************************ 00:01:24.183 START TEST ubsan 00:01:24.183 ************************************ 00:01:24.183 22:43:52 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:24.183 using ubsan 00:01:24.183 00:01:24.183 real 0m0.001s 00:01:24.183 user 0m0.000s 00:01:24.183 sys 0m0.000s 00:01:24.183 22:43:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:24.183 22:43:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.183 ************************************ 00:01:24.183 END TEST ubsan 00:01:24.183 ************************************ 00:01:24.183 22:43:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.183 22:43:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.183 22:43:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.183 22:43:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:24.444 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:24.444 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.704 Using 'verbs' RDMA provider 00:01:40.190 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:52.426 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:52.426 Creating mk/config.mk...done. 00:01:52.426 Creating mk/cc.flags.mk...done. 00:01:52.426 Type 'make' to build. 00:01:52.426 22:44:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:52.426 22:44:19 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:52.426 22:44:19 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:52.426 22:44:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.426 ************************************ 00:01:52.426 START TEST make 00:01:52.426 ************************************ 00:01:52.426 22:44:19 -- common/autotest_common.sh@1104 -- $ make -j144 00:01:52.426 make[1]: Nothing to be done for 'all'. 00:02:00.572 The Meson build system 00:02:00.572 Version: 1.3.1 00:02:00.572 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:00.572 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:00.572 Build type: native build 00:02:00.572 Program cat found: YES (/usr/bin/cat) 00:02:00.572 Project name: DPDK 00:02:00.572 Project version: 23.11.0 00:02:00.572 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:00.572 C linker for the host machine: cc ld.bfd 2.39-16 00:02:00.572 Host machine cpu family: x86_64 00:02:00.572 Host machine cpu: x86_64 00:02:00.572 Message: ## Building in Developer Mode ## 00:02:00.572 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.572 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:00.572 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.572 Program python3 found: YES (/usr/bin/python3) 00:02:00.572 Program cat found: YES (/usr/bin/cat) 00:02:00.572 Compiler for C supports arguments -march=native: YES 00:02:00.572 Checking for size of "void *" : 8 00:02:00.572 Checking for size of "void *" : 8 (cached) 00:02:00.572 Library m found: YES 00:02:00.572 Library numa found: YES 00:02:00.572 Has header "numaif.h" : YES 00:02:00.572 Library fdt found: NO 00:02:00.572 Library execinfo found: NO 00:02:00.572 Has header "execinfo.h" : YES 00:02:00.572 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:00.572 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.572 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.572 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.572 Run-time dependency openssl found: YES 3.0.9 00:02:00.572 Run-time dependency libpcap found: YES 1.10.4 00:02:00.572 Has header "pcap.h" with dependency libpcap: YES 00:02:00.572 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.572 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.572 Compiler for C supports arguments -Wformat: YES 00:02:00.572 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.572 Compiler for C supports arguments -Wformat-security: NO 00:02:00.572 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.572 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.572 Compiler for C 
supports arguments -Wnested-externs: YES 00:02:00.572 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.572 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.572 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.572 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.572 Compiler for C supports arguments -Wundef: YES 00:02:00.572 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.572 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.572 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.572 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.572 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.572 Program objdump found: YES (/usr/bin/objdump) 00:02:00.572 Compiler for C supports arguments -mavx512f: YES 00:02:00.572 Checking if "AVX512 checking" compiles: YES 00:02:00.572 Fetching value of define "__SSE4_2__" : 1 00:02:00.572 Fetching value of define "__AES__" : 1 00:02:00.572 Fetching value of define "__AVX__" : 1 00:02:00.572 Fetching value of define "__AVX2__" : 1 00:02:00.573 Fetching value of define "__AVX512BW__" : 1 00:02:00.573 Fetching value of define "__AVX512CD__" : 1 00:02:00.573 Fetching value of define "__AVX512DQ__" : 1 00:02:00.573 Fetching value of define "__AVX512F__" : 1 00:02:00.573 Fetching value of define "__AVX512VL__" : 1 00:02:00.573 Fetching value of define "__PCLMUL__" : 1 00:02:00.573 Fetching value of define "__RDRND__" : 1 00:02:00.573 Fetching value of define "__RDSEED__" : 1 00:02:00.573 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:00.573 Fetching value of define "__znver1__" : (undefined) 00:02:00.573 Fetching value of define "__znver2__" : (undefined) 00:02:00.573 Fetching value of define "__znver3__" : (undefined) 00:02:00.573 Fetching value of define "__znver4__" : (undefined) 00:02:00.573 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.573 Message: lib/log: Defining dependency "log" 00:02:00.573 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.573 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.573 Checking for function "getentropy" : NO 00:02:00.573 Message: lib/eal: Defining dependency "eal" 00:02:00.573 Message: lib/ring: Defining dependency "ring" 00:02:00.573 Message: lib/rcu: Defining dependency "rcu" 00:02:00.573 Message: lib/mempool: Defining dependency "mempool" 00:02:00.573 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.573 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.573 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.573 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.573 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.573 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.573 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:00.573 Compiler for C supports arguments -mpclmul: YES 00:02:00.573 Compiler for C supports arguments -maes: YES 00:02:00.573 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.573 Compiler for C supports arguments -mavx512bw: YES 00:02:00.573 Compiler for C supports arguments -mavx512dq: YES 00:02:00.573 Compiler for C supports arguments -mavx512vl: YES 00:02:00.573 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.573 Compiler for C supports arguments -mavx2: YES 00:02:00.573 Compiler for C supports arguments -mavx: YES 00:02:00.573 Message: lib/net: Defining dependency "net" 
00:02:00.573 Message: lib/meter: Defining dependency "meter" 00:02:00.573 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.573 Message: lib/pci: Defining dependency "pci" 00:02:00.573 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.573 Message: lib/hash: Defining dependency "hash" 00:02:00.573 Message: lib/timer: Defining dependency "timer" 00:02:00.573 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.573 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.573 Message: lib/dmadev: Defining dependency "dmadev" 00:02:00.573 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.573 Message: lib/power: Defining dependency "power" 00:02:00.573 Message: lib/reorder: Defining dependency "reorder" 00:02:00.573 Message: lib/security: Defining dependency "security" 00:02:00.573 Has header "linux/userfaultfd.h" : YES 00:02:00.573 Has header "linux/vduse.h" : YES 00:02:00.573 Message: lib/vhost: Defining dependency "vhost" 00:02:00.573 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.573 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.573 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.573 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.573 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:00.573 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:00.573 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:00.573 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:00.573 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:00.573 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:00.573 Program doxygen found: YES (/usr/bin/doxygen) 00:02:00.573 Configuring doxy-api-html.conf using configuration 00:02:00.573 Configuring doxy-api-man.conf using configuration 00:02:00.573 Program mandb found: YES (/usr/bin/mandb) 00:02:00.573 Program sphinx-build found: NO 00:02:00.573 Configuring rte_build_config.h using configuration 00:02:00.573 Message: 00:02:00.573 ================= 00:02:00.573 Applications Enabled 00:02:00.573 ================= 00:02:00.573 00:02:00.573 apps: 00:02:00.573 00:02:00.573 00:02:00.573 Message: 00:02:00.573 ================= 00:02:00.573 Libraries Enabled 00:02:00.573 ================= 00:02:00.573 00:02:00.573 libs: 00:02:00.573 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:00.573 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:00.573 cryptodev, dmadev, power, reorder, security, vhost, 00:02:00.573 00:02:00.573 Message: 00:02:00.573 =============== 00:02:00.573 Drivers Enabled 00:02:00.573 =============== 00:02:00.573 00:02:00.573 common: 00:02:00.573 00:02:00.573 bus: 00:02:00.573 pci, vdev, 00:02:00.573 mempool: 00:02:00.573 ring, 00:02:00.573 dma: 00:02:00.573 00:02:00.573 net: 00:02:00.573 00:02:00.573 crypto: 00:02:00.573 00:02:00.573 compress: 00:02:00.573 00:02:00.573 vdpa: 00:02:00.573 00:02:00.573 00:02:00.573 Message: 00:02:00.573 ================= 00:02:00.573 Content Skipped 00:02:00.573 ================= 00:02:00.573 00:02:00.573 apps: 00:02:00.573 dumpcap: explicitly disabled via build config 00:02:00.573 graph: explicitly disabled via build config 00:02:00.573 pdump: explicitly disabled via build config 00:02:00.573 proc-info: explicitly disabled via build config 00:02:00.573 test-acl: explicitly disabled via build config 
00:02:00.573 test-bbdev: explicitly disabled via build config 00:02:00.573 test-cmdline: explicitly disabled via build config 00:02:00.573 test-compress-perf: explicitly disabled via build config 00:02:00.573 test-crypto-perf: explicitly disabled via build config 00:02:00.573 test-dma-perf: explicitly disabled via build config 00:02:00.573 test-eventdev: explicitly disabled via build config 00:02:00.573 test-fib: explicitly disabled via build config 00:02:00.573 test-flow-perf: explicitly disabled via build config 00:02:00.573 test-gpudev: explicitly disabled via build config 00:02:00.573 test-mldev: explicitly disabled via build config 00:02:00.573 test-pipeline: explicitly disabled via build config 00:02:00.573 test-pmd: explicitly disabled via build config 00:02:00.573 test-regex: explicitly disabled via build config 00:02:00.573 test-sad: explicitly disabled via build config 00:02:00.573 test-security-perf: explicitly disabled via build config 00:02:00.573 00:02:00.573 libs: 00:02:00.573 metrics: explicitly disabled via build config 00:02:00.573 acl: explicitly disabled via build config 00:02:00.573 bbdev: explicitly disabled via build config 00:02:00.573 bitratestats: explicitly disabled via build config 00:02:00.573 bpf: explicitly disabled via build config 00:02:00.573 cfgfile: explicitly disabled via build config 00:02:00.573 distributor: explicitly disabled via build config 00:02:00.573 efd: explicitly disabled via build config 00:02:00.573 eventdev: explicitly disabled via build config 00:02:00.573 dispatcher: explicitly disabled via build config 00:02:00.573 gpudev: explicitly disabled via build config 00:02:00.573 gro: explicitly disabled via build config 00:02:00.573 gso: explicitly disabled via build config 00:02:00.573 ip_frag: explicitly disabled via build config 00:02:00.573 jobstats: explicitly disabled via build config 00:02:00.573 latencystats: explicitly disabled via build config 00:02:00.573 lpm: explicitly disabled via build config 00:02:00.573 member: explicitly disabled via build config 00:02:00.573 pcapng: explicitly disabled via build config 00:02:00.573 rawdev: explicitly disabled via build config 00:02:00.573 regexdev: explicitly disabled via build config 00:02:00.573 mldev: explicitly disabled via build config 00:02:00.573 rib: explicitly disabled via build config 00:02:00.573 sched: explicitly disabled via build config 00:02:00.573 stack: explicitly disabled via build config 00:02:00.573 ipsec: explicitly disabled via build config 00:02:00.573 pdcp: explicitly disabled via build config 00:02:00.573 fib: explicitly disabled via build config 00:02:00.573 port: explicitly disabled via build config 00:02:00.573 pdump: explicitly disabled via build config 00:02:00.573 table: explicitly disabled via build config 00:02:00.573 pipeline: explicitly disabled via build config 00:02:00.573 graph: explicitly disabled via build config 00:02:00.573 node: explicitly disabled via build config 00:02:00.573 00:02:00.573 drivers: 00:02:00.573 common/cpt: not in enabled drivers build config 00:02:00.573 common/dpaax: not in enabled drivers build config 00:02:00.573 common/iavf: not in enabled drivers build config 00:02:00.573 common/idpf: not in enabled drivers build config 00:02:00.573 common/mvep: not in enabled drivers build config 00:02:00.574 common/octeontx: not in enabled drivers build config 00:02:00.574 bus/auxiliary: not in enabled drivers build config 00:02:00.574 bus/cdx: not in enabled drivers build config 00:02:00.574 bus/dpaa: not in enabled drivers build config 
00:02:00.574 bus/fslmc: not in enabled drivers build config 00:02:00.574 bus/ifpga: not in enabled drivers build config 00:02:00.574 bus/platform: not in enabled drivers build config 00:02:00.574 bus/vmbus: not in enabled drivers build config 00:02:00.574 common/cnxk: not in enabled drivers build config 00:02:00.574 common/mlx5: not in enabled drivers build config 00:02:00.574 common/nfp: not in enabled drivers build config 00:02:00.574 common/qat: not in enabled drivers build config 00:02:00.574 common/sfc_efx: not in enabled drivers build config 00:02:00.574 mempool/bucket: not in enabled drivers build config 00:02:00.574 mempool/cnxk: not in enabled drivers build config 00:02:00.574 mempool/dpaa: not in enabled drivers build config 00:02:00.574 mempool/dpaa2: not in enabled drivers build config 00:02:00.574 mempool/octeontx: not in enabled drivers build config 00:02:00.574 mempool/stack: not in enabled drivers build config 00:02:00.574 dma/cnxk: not in enabled drivers build config 00:02:00.574 dma/dpaa: not in enabled drivers build config 00:02:00.574 dma/dpaa2: not in enabled drivers build config 00:02:00.574 dma/hisilicon: not in enabled drivers build config 00:02:00.574 dma/idxd: not in enabled drivers build config 00:02:00.574 dma/ioat: not in enabled drivers build config 00:02:00.574 dma/skeleton: not in enabled drivers build config 00:02:00.574 net/af_packet: not in enabled drivers build config 00:02:00.574 net/af_xdp: not in enabled drivers build config 00:02:00.574 net/ark: not in enabled drivers build config 00:02:00.574 net/atlantic: not in enabled drivers build config 00:02:00.574 net/avp: not in enabled drivers build config 00:02:00.574 net/axgbe: not in enabled drivers build config 00:02:00.574 net/bnx2x: not in enabled drivers build config 00:02:00.574 net/bnxt: not in enabled drivers build config 00:02:00.574 net/bonding: not in enabled drivers build config 00:02:00.574 net/cnxk: not in enabled drivers build config 00:02:00.574 net/cpfl: not in enabled drivers build config 00:02:00.574 net/cxgbe: not in enabled drivers build config 00:02:00.574 net/dpaa: not in enabled drivers build config 00:02:00.574 net/dpaa2: not in enabled drivers build config 00:02:00.574 net/e1000: not in enabled drivers build config 00:02:00.574 net/ena: not in enabled drivers build config 00:02:00.574 net/enetc: not in enabled drivers build config 00:02:00.574 net/enetfec: not in enabled drivers build config 00:02:00.574 net/enic: not in enabled drivers build config 00:02:00.574 net/failsafe: not in enabled drivers build config 00:02:00.574 net/fm10k: not in enabled drivers build config 00:02:00.574 net/gve: not in enabled drivers build config 00:02:00.574 net/hinic: not in enabled drivers build config 00:02:00.574 net/hns3: not in enabled drivers build config 00:02:00.574 net/i40e: not in enabled drivers build config 00:02:00.574 net/iavf: not in enabled drivers build config 00:02:00.574 net/ice: not in enabled drivers build config 00:02:00.574 net/idpf: not in enabled drivers build config 00:02:00.574 net/igc: not in enabled drivers build config 00:02:00.574 net/ionic: not in enabled drivers build config 00:02:00.574 net/ipn3ke: not in enabled drivers build config 00:02:00.574 net/ixgbe: not in enabled drivers build config 00:02:00.574 net/mana: not in enabled drivers build config 00:02:00.574 net/memif: not in enabled drivers build config 00:02:00.574 net/mlx4: not in enabled drivers build config 00:02:00.574 net/mlx5: not in enabled drivers build config 00:02:00.574 net/mvneta: not in enabled 
drivers build config 00:02:00.574 net/mvpp2: not in enabled drivers build config 00:02:00.574 net/netvsc: not in enabled drivers build config 00:02:00.574 net/nfb: not in enabled drivers build config 00:02:00.574 net/nfp: not in enabled drivers build config 00:02:00.574 net/ngbe: not in enabled drivers build config 00:02:00.574 net/null: not in enabled drivers build config 00:02:00.574 net/octeontx: not in enabled drivers build config 00:02:00.574 net/octeon_ep: not in enabled drivers build config 00:02:00.574 net/pcap: not in enabled drivers build config 00:02:00.574 net/pfe: not in enabled drivers build config 00:02:00.574 net/qede: not in enabled drivers build config 00:02:00.574 net/ring: not in enabled drivers build config 00:02:00.574 net/sfc: not in enabled drivers build config 00:02:00.574 net/softnic: not in enabled drivers build config 00:02:00.574 net/tap: not in enabled drivers build config 00:02:00.574 net/thunderx: not in enabled drivers build config 00:02:00.574 net/txgbe: not in enabled drivers build config 00:02:00.574 net/vdev_netvsc: not in enabled drivers build config 00:02:00.574 net/vhost: not in enabled drivers build config 00:02:00.574 net/virtio: not in enabled drivers build config 00:02:00.574 net/vmxnet3: not in enabled drivers build config 00:02:00.574 raw/*: missing internal dependency, "rawdev" 00:02:00.574 crypto/armv8: not in enabled drivers build config 00:02:00.574 crypto/bcmfs: not in enabled drivers build config 00:02:00.574 crypto/caam_jr: not in enabled drivers build config 00:02:00.574 crypto/ccp: not in enabled drivers build config 00:02:00.574 crypto/cnxk: not in enabled drivers build config 00:02:00.574 crypto/dpaa_sec: not in enabled drivers build config 00:02:00.574 crypto/dpaa2_sec: not in enabled drivers build config 00:02:00.574 crypto/ipsec_mb: not in enabled drivers build config 00:02:00.574 crypto/mlx5: not in enabled drivers build config 00:02:00.574 crypto/mvsam: not in enabled drivers build config 00:02:00.574 crypto/nitrox: not in enabled drivers build config 00:02:00.574 crypto/null: not in enabled drivers build config 00:02:00.574 crypto/octeontx: not in enabled drivers build config 00:02:00.574 crypto/openssl: not in enabled drivers build config 00:02:00.574 crypto/scheduler: not in enabled drivers build config 00:02:00.574 crypto/uadk: not in enabled drivers build config 00:02:00.574 crypto/virtio: not in enabled drivers build config 00:02:00.574 compress/isal: not in enabled drivers build config 00:02:00.574 compress/mlx5: not in enabled drivers build config 00:02:00.574 compress/octeontx: not in enabled drivers build config 00:02:00.574 compress/zlib: not in enabled drivers build config 00:02:00.574 regex/*: missing internal dependency, "regexdev" 00:02:00.574 ml/*: missing internal dependency, "mldev" 00:02:00.574 vdpa/ifc: not in enabled drivers build config 00:02:00.574 vdpa/mlx5: not in enabled drivers build config 00:02:00.574 vdpa/nfp: not in enabled drivers build config 00:02:00.574 vdpa/sfc: not in enabled drivers build config 00:02:00.574 event/*: missing internal dependency, "eventdev" 00:02:00.574 baseband/*: missing internal dependency, "bbdev" 00:02:00.574 gpu/*: missing internal dependency, "gpudev" 00:02:00.574 00:02:00.574 00:02:00.574 Build targets in project: 84 00:02:00.574 00:02:00.574 DPDK 23.11.0 00:02:00.574 00:02:00.574 User defined options 00:02:00.574 buildtype : debug 00:02:00.574 default_library : shared 00:02:00.574 libdir : lib 00:02:00.574 prefix : 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:00.574 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:00.574 c_link_args : 00:02:00.574 cpu_instruction_set: native 00:02:00.574 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:00.574 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:00.574 enable_docs : false 00:02:00.574 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:00.574 enable_kmods : false 00:02:00.574 tests : false 00:02:00.574 00:02:00.574 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:00.574 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:00.574 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:00.574 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:00.574 [3/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:00.574 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:00.574 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.574 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.574 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.574 [8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.574 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.574 [10/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:00.574 [11/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.574 [12/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.574 [13/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.574 [14/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:00.574 [15/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:00.574 [16/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:00.574 [17/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:00.574 [18/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.574 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:00.574 [20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.574 [21/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.574 [22/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.574 [23/264] Linking static target lib/librte_kvargs.a 00:02:00.574 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.574 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:00.574 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:00.574 [27/264] Linking static target lib/librte_pci.a 00:02:00.574 [28/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.575 
[29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:00.575 [30/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.575 [31/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.575 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.575 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:00.575 [34/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.575 [35/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:00.575 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.575 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.575 [38/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.575 [39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:00.575 [40/264] Linking static target lib/librte_log.a 00:02:00.575 [41/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.575 [42/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:00.575 [43/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.575 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.575 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.575 [46/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.575 [47/264] Linking static target lib/librte_ring.a 00:02:00.575 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.575 [49/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.575 [50/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.575 [51/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.575 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.575 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.575 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.575 [55/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.575 [56/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.575 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.575 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.575 [59/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.575 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.575 [61/264] Linking static target lib/librte_telemetry.a 00:02:00.575 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.575 [63/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.575 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.575 [65/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.575 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.575 [67/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.575 [68/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.575 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:02:00.575 [70/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.575 [71/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.575 [72/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.575 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.575 [74/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:00.575 [75/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.575 [76/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.575 [77/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.575 [78/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.575 [79/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.575 [80/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.575 [81/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.575 [82/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.575 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.575 [84/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.575 [85/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:00.575 [86/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.575 [87/264] Linking static target lib/librte_meter.a 00:02:00.575 [88/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.575 [89/264] Linking static target lib/librte_timer.a 00:02:00.575 [90/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.575 [91/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.575 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.575 [93/264] Linking static target lib/librte_cmdline.a 00:02:00.575 [94/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.575 [95/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.575 [96/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.575 [97/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.575 [98/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.575 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.575 [100/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.575 [101/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.575 [102/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.575 [103/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:00.575 [104/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.575 [105/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.575 [106/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.575 [107/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.575 [108/264] Linking static target lib/librte_compressdev.a 00:02:00.575 [109/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.575 [110/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.837 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.837 [112/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.837 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.837 [114/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:00.837 [115/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.837 [116/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.837 [117/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:00.837 [118/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.837 [119/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:00.837 [120/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.837 [121/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.837 [122/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:00.837 [123/264] Linking static target lib/librte_rcu.a 00:02:00.837 [124/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.837 [125/264] Linking static target lib/librte_security.a 00:02:00.837 [126/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.837 [127/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.837 [128/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.837 [129/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.837 [130/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:00.837 [131/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.837 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.837 [133/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:00.837 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.837 [135/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.837 [136/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.837 [137/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:00.837 [138/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.837 [139/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:00.837 [140/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.837 [141/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.837 [142/264] Linking static target lib/librte_dmadev.a 00:02:00.837 [143/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.837 [144/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.837 [145/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.837 [146/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.837 [147/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.837 [148/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:00.837 [149/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 
00:02:00.837 [150/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.837 [151/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.837 [152/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.837 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.837 [154/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.837 [155/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.837 [156/264] Linking static target drivers/librte_mempool_ring.a 00:02:00.837 [157/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.837 [158/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.837 [159/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.837 [160/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.837 [161/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.837 [162/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.837 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.837 [164/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.837 [165/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.837 [166/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.099 [167/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.099 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:01.099 [169/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.099 [170/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.099 [171/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:01.099 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.099 [173/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.099 [174/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.099 [175/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.099 [176/264] Linking static target lib/librte_mempool.a 00:02:01.099 [177/264] Linking static target drivers/librte_bus_vdev.a 00:02:01.099 [178/264] Linking static target lib/librte_power.a 00:02:01.099 [179/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.099 [180/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:01.099 [181/264] Linking static target lib/librte_reorder.a 00:02:01.099 [182/264] Linking static target lib/librte_eal.a 00:02:01.099 [183/264] Linking static target lib/librte_net.a 00:02:01.099 [184/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.099 [185/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.099 [186/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.099 [187/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.099 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:01.099 [189/264] Linking target 
lib/librte_log.so.24.0 00:02:01.099 [190/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.099 [191/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:01.099 [192/264] Linking static target lib/librte_cryptodev.a 00:02:01.099 [193/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.099 [194/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:01.099 [195/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.099 [196/264] Linking static target lib/librte_mbuf.a 00:02:01.099 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.099 [198/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.099 [199/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.099 [200/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.099 [201/264] Linking static target drivers/librte_bus_pci.a 00:02:01.099 [202/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:01.361 [203/264] Linking target lib/librte_kvargs.so.24.0 00:02:01.361 [204/264] Linking target lib/librte_telemetry.so.24.0 00:02:01.361 [205/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.361 [206/264] Linking static target lib/librte_hash.a 00:02:01.361 [207/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.361 [208/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.361 [209/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.361 [210/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.361 [211/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:01.361 [212/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.361 [213/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:01.622 [214/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.622 [215/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.883 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.883 [217/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.883 [218/264] Linking static target lib/librte_ethdev.a 00:02:01.883 [219/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.883 [220/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.883 [221/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.144 [222/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.144 [223/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.404 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.404 [225/264] Linking static target lib/librte_vhost.a 00:02:03.387 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.773 [227/264] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.363 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.308 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.308 [230/264] Linking target lib/librte_eal.so.24.0 00:02:12.569 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:12.569 [232/264] Linking target lib/librte_pci.so.24.0 00:02:12.569 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:12.569 [234/264] Linking target lib/librte_meter.so.24.0 00:02:12.569 [235/264] Linking target lib/librte_ring.so.24.0 00:02:12.569 [236/264] Linking target lib/librte_timer.so.24.0 00:02:12.569 [237/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:12.569 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:12.569 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:12.569 [240/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:12.569 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:12.569 [242/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:12.830 [243/264] Linking target lib/librte_mempool.so.24.0 00:02:12.830 [244/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:12.830 [245/264] Linking target lib/librte_rcu.so.24.0 00:02:12.830 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:12.830 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:12.830 [248/264] Linking target lib/librte_mbuf.so.24.0 00:02:12.830 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:13.091 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:13.091 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:02:13.091 [252/264] Linking target lib/librte_reorder.so.24.0 00:02:13.091 [253/264] Linking target lib/librte_compressdev.so.24.0 00:02:13.091 [254/264] Linking target lib/librte_net.so.24.0 00:02:13.091 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:13.091 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:13.091 [257/264] Linking target lib/librte_hash.so.24.0 00:02:13.353 [258/264] Linking target lib/librte_security.so.24.0 00:02:13.353 [259/264] Linking target lib/librte_cmdline.so.24.0 00:02:13.353 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:13.353 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:13.353 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:13.353 [263/264] Linking target lib/librte_power.so.24.0 00:02:13.353 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:13.353 INFO: autodetecting backend as ninja 00:02:13.353 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:14.297 CC lib/log/log.o 00:02:14.297 CC lib/ut_mock/mock.o 00:02:14.297 CC lib/log/log_flags.o 00:02:14.297 CC lib/log/log_deprecated.o 00:02:14.297 CC lib/ut/ut.o 00:02:14.558 LIB libspdk_ut_mock.a 00:02:14.558 LIB libspdk_log.a 00:02:14.558 LIB libspdk_ut.a 00:02:14.558 SO libspdk_ut_mock.so.5.0 
00:02:14.558 SO libspdk_log.so.6.1 00:02:14.558 SO libspdk_ut.so.1.0 00:02:14.558 SYMLINK libspdk_ut_mock.so 00:02:14.558 SYMLINK libspdk_log.so 00:02:14.558 SYMLINK libspdk_ut.so 00:02:14.820 CC lib/dma/dma.o 00:02:14.820 CC lib/util/base64.o 00:02:14.820 CC lib/util/bit_array.o 00:02:14.820 CC lib/util/cpuset.o 00:02:14.820 CC lib/util/crc16.o 00:02:14.820 CC lib/util/crc32.o 00:02:14.820 CC lib/util/crc32c.o 00:02:14.820 CC lib/util/crc32_ieee.o 00:02:14.820 CXX lib/trace_parser/trace.o 00:02:14.820 CC lib/util/crc64.o 00:02:14.820 CC lib/util/dif.o 00:02:14.820 CC lib/util/fd.o 00:02:14.820 CC lib/util/file.o 00:02:14.820 CC lib/util/hexlify.o 00:02:14.820 CC lib/ioat/ioat.o 00:02:14.820 CC lib/util/iov.o 00:02:14.820 CC lib/util/math.o 00:02:14.820 CC lib/util/pipe.o 00:02:14.820 CC lib/util/uuid.o 00:02:14.820 CC lib/util/strerror_tls.o 00:02:14.820 CC lib/util/string.o 00:02:14.820 CC lib/util/fd_group.o 00:02:14.820 CC lib/util/xor.o 00:02:14.820 CC lib/util/zipf.o 00:02:14.820 CC lib/vfio_user/host/vfio_user_pci.o 00:02:14.820 CC lib/vfio_user/host/vfio_user.o 00:02:15.081 LIB libspdk_dma.a 00:02:15.081 SO libspdk_dma.so.3.0 00:02:15.081 SYMLINK libspdk_dma.so 00:02:15.081 LIB libspdk_vfio_user.a 00:02:15.081 LIB libspdk_ioat.a 00:02:15.081 SO libspdk_vfio_user.so.4.0 00:02:15.342 SO libspdk_ioat.so.6.0 00:02:15.342 LIB libspdk_util.a 00:02:15.342 SYMLINK libspdk_vfio_user.so 00:02:15.342 SYMLINK libspdk_ioat.so 00:02:15.342 SO libspdk_util.so.8.0 00:02:15.342 SYMLINK libspdk_util.so 00:02:15.605 LIB libspdk_trace_parser.a 00:02:15.605 SO libspdk_trace_parser.so.4.0 00:02:15.605 CC lib/json/json_parse.o 00:02:15.605 CC lib/conf/conf.o 00:02:15.605 CC lib/json/json_write.o 00:02:15.605 CC lib/json/json_util.o 00:02:15.605 CC lib/idxd/idxd_user.o 00:02:15.605 CC lib/idxd/idxd.o 00:02:15.605 CC lib/rdma/common.o 00:02:15.605 CC lib/rdma/rdma_verbs.o 00:02:15.605 CC lib/vmd/vmd.o 00:02:15.605 CC lib/vmd/led.o 00:02:15.605 CC lib/idxd/idxd_kernel.o 00:02:15.605 CC lib/env_dpdk/env.o 00:02:15.605 CC lib/env_dpdk/memory.o 00:02:15.605 CC lib/env_dpdk/pci.o 00:02:15.605 CC lib/env_dpdk/threads.o 00:02:15.605 CC lib/env_dpdk/init.o 00:02:15.605 CC lib/env_dpdk/pci_ioat.o 00:02:15.605 CC lib/env_dpdk/pci_virtio.o 00:02:15.605 CC lib/env_dpdk/pci_vmd.o 00:02:15.605 CC lib/env_dpdk/pci_idxd.o 00:02:15.605 CC lib/env_dpdk/pci_event.o 00:02:15.605 CC lib/env_dpdk/sigbus_handler.o 00:02:15.605 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:15.605 CC lib/env_dpdk/pci_dpdk.o 00:02:15.605 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:15.866 SYMLINK libspdk_trace_parser.so 00:02:15.867 LIB libspdk_conf.a 00:02:15.867 SO libspdk_conf.so.5.0 00:02:15.867 LIB libspdk_json.a 00:02:15.867 LIB libspdk_rdma.a 00:02:16.128 SYMLINK libspdk_conf.so 00:02:16.128 SO libspdk_json.so.5.1 00:02:16.128 SO libspdk_rdma.so.5.0 00:02:16.128 SYMLINK libspdk_json.so 00:02:16.128 SYMLINK libspdk_rdma.so 00:02:16.128 LIB libspdk_idxd.a 00:02:16.128 SO libspdk_idxd.so.11.0 00:02:16.390 LIB libspdk_vmd.a 00:02:16.390 SYMLINK libspdk_idxd.so 00:02:16.390 SO libspdk_vmd.so.5.0 00:02:16.390 CC lib/jsonrpc/jsonrpc_server.o 00:02:16.390 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:16.390 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:16.390 CC lib/jsonrpc/jsonrpc_client.o 00:02:16.390 SYMLINK libspdk_vmd.so 00:02:16.652 LIB libspdk_jsonrpc.a 00:02:16.652 SO libspdk_jsonrpc.so.5.1 00:02:16.652 SYMLINK libspdk_jsonrpc.so 00:02:16.913 LIB libspdk_env_dpdk.a 00:02:16.913 CC lib/rpc/rpc.o 00:02:16.913 SO libspdk_env_dpdk.so.13.0 00:02:17.175 LIB 
libspdk_rpc.a 00:02:17.175 SYMLINK libspdk_env_dpdk.so 00:02:17.175 SO libspdk_rpc.so.5.0 00:02:17.175 SYMLINK libspdk_rpc.so 00:02:17.436 CC lib/notify/notify.o 00:02:17.436 CC lib/trace/trace.o 00:02:17.436 CC lib/notify/notify_rpc.o 00:02:17.436 CC lib/trace/trace_flags.o 00:02:17.436 CC lib/trace/trace_rpc.o 00:02:17.436 CC lib/sock/sock.o 00:02:17.436 CC lib/sock/sock_rpc.o 00:02:17.698 LIB libspdk_notify.a 00:02:17.698 SO libspdk_notify.so.5.0 00:02:17.698 LIB libspdk_trace.a 00:02:17.698 SO libspdk_trace.so.9.0 00:02:17.698 SYMLINK libspdk_notify.so 00:02:17.698 SYMLINK libspdk_trace.so 00:02:17.698 LIB libspdk_sock.a 00:02:17.960 SO libspdk_sock.so.8.0 00:02:17.960 SYMLINK libspdk_sock.so 00:02:17.960 CC lib/thread/thread.o 00:02:17.960 CC lib/thread/iobuf.o 00:02:18.222 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:18.222 CC lib/nvme/nvme_ctrlr.o 00:02:18.222 CC lib/nvme/nvme_fabric.o 00:02:18.222 CC lib/nvme/nvme_ns_cmd.o 00:02:18.222 CC lib/nvme/nvme_ns.o 00:02:18.222 CC lib/nvme/nvme_pcie_common.o 00:02:18.222 CC lib/nvme/nvme_pcie.o 00:02:18.222 CC lib/nvme/nvme_qpair.o 00:02:18.222 CC lib/nvme/nvme.o 00:02:18.222 CC lib/nvme/nvme_quirks.o 00:02:18.222 CC lib/nvme/nvme_transport.o 00:02:18.222 CC lib/nvme/nvme_discovery.o 00:02:18.222 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:18.222 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:18.222 CC lib/nvme/nvme_tcp.o 00:02:18.222 CC lib/nvme/nvme_opal.o 00:02:18.222 CC lib/nvme/nvme_io_msg.o 00:02:18.222 CC lib/nvme/nvme_poll_group.o 00:02:18.222 CC lib/nvme/nvme_zns.o 00:02:18.222 CC lib/nvme/nvme_cuse.o 00:02:18.222 CC lib/nvme/nvme_vfio_user.o 00:02:18.222 CC lib/nvme/nvme_rdma.o 00:02:19.165 LIB libspdk_thread.a 00:02:19.427 SO libspdk_thread.so.9.0 00:02:19.427 SYMLINK libspdk_thread.so 00:02:19.688 CC lib/accel/accel.o 00:02:19.688 CC lib/accel/accel_rpc.o 00:02:19.688 CC lib/virtio/virtio.o 00:02:19.688 CC lib/accel/accel_sw.o 00:02:19.688 CC lib/virtio/virtio_vhost_user.o 00:02:19.688 CC lib/virtio/virtio_vfio_user.o 00:02:19.688 CC lib/virtio/virtio_pci.o 00:02:19.688 CC lib/blob/blobstore.o 00:02:19.688 CC lib/blob/request.o 00:02:19.688 CC lib/init/json_config.o 00:02:19.688 CC lib/blob/zeroes.o 00:02:19.688 CC lib/blob/blob_bs_dev.o 00:02:19.688 CC lib/init/subsystem.o 00:02:19.688 CC lib/init/subsystem_rpc.o 00:02:19.688 CC lib/init/rpc.o 00:02:19.950 LIB libspdk_init.a 00:02:19.950 SO libspdk_init.so.4.0 00:02:19.950 LIB libspdk_virtio.a 00:02:19.950 LIB libspdk_nvme.a 00:02:19.950 SO libspdk_virtio.so.6.0 00:02:19.950 SYMLINK libspdk_init.so 00:02:19.950 SYMLINK libspdk_virtio.so 00:02:19.950 SO libspdk_nvme.so.12.0 00:02:20.212 CC lib/event/app.o 00:02:20.212 CC lib/event/reactor.o 00:02:20.212 CC lib/event/app_rpc.o 00:02:20.212 CC lib/event/log_rpc.o 00:02:20.212 CC lib/event/scheduler_static.o 00:02:20.212 SYMLINK libspdk_nvme.so 00:02:20.473 LIB libspdk_accel.a 00:02:20.473 SO libspdk_accel.so.14.0 00:02:20.473 LIB libspdk_event.a 00:02:20.473 SYMLINK libspdk_accel.so 00:02:20.473 SO libspdk_event.so.12.0 00:02:20.735 SYMLINK libspdk_event.so 00:02:20.735 CC lib/bdev/bdev.o 00:02:20.735 CC lib/bdev/bdev_rpc.o 00:02:20.735 CC lib/bdev/bdev_zone.o 00:02:20.735 CC lib/bdev/part.o 00:02:20.735 CC lib/bdev/scsi_nvme.o 00:02:22.163 LIB libspdk_blob.a 00:02:22.163 SO libspdk_blob.so.10.1 00:02:22.163 SYMLINK libspdk_blob.so 00:02:22.163 CC lib/lvol/lvol.o 00:02:22.163 CC lib/blobfs/blobfs.o 00:02:22.163 CC lib/blobfs/tree.o 00:02:23.107 LIB libspdk_bdev.a 00:02:23.107 LIB libspdk_blobfs.a 00:02:23.107 LIB libspdk_lvol.a 00:02:23.107 SO 
libspdk_bdev.so.14.0 00:02:23.107 SO libspdk_blobfs.so.9.0 00:02:23.107 SO libspdk_lvol.so.9.1 00:02:23.107 SYMLINK libspdk_lvol.so 00:02:23.107 SYMLINK libspdk_bdev.so 00:02:23.107 SYMLINK libspdk_blobfs.so 00:02:23.368 CC lib/nvmf/ctrlr.o 00:02:23.368 CC lib/nvmf/ctrlr_discovery.o 00:02:23.368 CC lib/nvmf/ctrlr_bdev.o 00:02:23.368 CC lib/nvmf/subsystem.o 00:02:23.368 CC lib/nvmf/nvmf.o 00:02:23.368 CC lib/nvmf/nvmf_rpc.o 00:02:23.368 CC lib/nvmf/transport.o 00:02:23.368 CC lib/nvmf/tcp.o 00:02:23.368 CC lib/nvmf/rdma.o 00:02:23.368 CC lib/ftl/ftl_core.o 00:02:23.368 CC lib/ublk/ublk.o 00:02:23.368 CC lib/ublk/ublk_rpc.o 00:02:23.368 CC lib/ftl/ftl_init.o 00:02:23.368 CC lib/scsi/dev.o 00:02:23.368 CC lib/ftl/ftl_layout.o 00:02:23.368 CC lib/ftl/ftl_debug.o 00:02:23.368 CC lib/scsi/lun.o 00:02:23.368 CC lib/scsi/port.o 00:02:23.368 CC lib/ftl/ftl_io.o 00:02:23.368 CC lib/nbd/nbd.o 00:02:23.368 CC lib/ftl/ftl_l2p.o 00:02:23.368 CC lib/scsi/scsi.o 00:02:23.368 CC lib/ftl/ftl_sb.o 00:02:23.368 CC lib/scsi/scsi_bdev.o 00:02:23.368 CC lib/nbd/nbd_rpc.o 00:02:23.368 CC lib/ftl/ftl_l2p_flat.o 00:02:23.368 CC lib/scsi/scsi_pr.o 00:02:23.368 CC lib/scsi/scsi_rpc.o 00:02:23.368 CC lib/ftl/ftl_nv_cache.o 00:02:23.368 CC lib/scsi/task.o 00:02:23.368 CC lib/ftl/ftl_band.o 00:02:23.368 CC lib/ftl/ftl_band_ops.o 00:02:23.368 CC lib/ftl/ftl_writer.o 00:02:23.368 CC lib/ftl/ftl_rq.o 00:02:23.368 CC lib/ftl/ftl_reloc.o 00:02:23.368 CC lib/ftl/ftl_l2p_cache.o 00:02:23.368 CC lib/ftl/ftl_p2l.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.368 CC lib/ftl/utils/ftl_md.o 00:02:23.368 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.368 CC lib/ftl/utils/ftl_conf.o 00:02:23.368 CC lib/ftl/utils/ftl_mempool.o 00:02:23.368 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.368 CC lib/ftl/utils/ftl_property.o 00:02:23.368 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.368 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.368 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.368 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.368 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.368 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.368 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.368 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.368 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.368 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.368 CC lib/ftl/base/ftl_base_dev.o 00:02:23.368 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.368 CC lib/ftl/ftl_trace.o 00:02:23.938 LIB libspdk_nbd.a 00:02:23.938 SO libspdk_nbd.so.6.0 00:02:23.938 LIB libspdk_scsi.a 00:02:23.938 SYMLINK libspdk_nbd.so 00:02:23.938 LIB libspdk_ublk.a 00:02:23.938 SO libspdk_scsi.so.8.0 00:02:23.938 SO libspdk_ublk.so.2.0 00:02:23.938 SYMLINK libspdk_ublk.so 00:02:24.199 SYMLINK libspdk_scsi.so 00:02:24.199 LIB libspdk_ftl.a 00:02:24.199 CC lib/iscsi/conn.o 00:02:24.199 CC lib/iscsi/init_grp.o 00:02:24.199 CC lib/iscsi/iscsi.o 00:02:24.199 CC lib/iscsi/md5.o 00:02:24.199 CC lib/iscsi/param.o 00:02:24.199 CC lib/iscsi/portal_grp.o 00:02:24.199 CC lib/iscsi/tgt_node.o 00:02:24.199 CC lib/iscsi/iscsi_subsystem.o 
00:02:24.199 CC lib/iscsi/iscsi_rpc.o 00:02:24.199 CC lib/iscsi/task.o 00:02:24.199 CC lib/vhost/vhost_rpc.o 00:02:24.199 CC lib/vhost/vhost.o 00:02:24.199 CC lib/vhost/vhost_blk.o 00:02:24.199 CC lib/vhost/vhost_scsi.o 00:02:24.199 CC lib/vhost/rte_vhost_user.o 00:02:24.199 SO libspdk_ftl.so.8.0 00:02:24.772 SYMLINK libspdk_ftl.so 00:02:25.033 LIB libspdk_nvmf.a 00:02:25.294 SO libspdk_nvmf.so.17.0 00:02:25.294 LIB libspdk_vhost.a 00:02:25.294 SO libspdk_vhost.so.7.1 00:02:25.294 SYMLINK libspdk_nvmf.so 00:02:25.294 SYMLINK libspdk_vhost.so 00:02:25.294 LIB libspdk_iscsi.a 00:02:25.555 SO libspdk_iscsi.so.7.0 00:02:25.555 SYMLINK libspdk_iscsi.so 00:02:26.129 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.129 CC module/accel/error/accel_error.o 00:02:26.129 CC module/accel/error/accel_error_rpc.o 00:02:26.129 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.129 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.129 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.129 CC module/blob/bdev/blob_bdev.o 00:02:26.129 CC module/accel/ioat/accel_ioat.o 00:02:26.129 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.129 CC module/accel/dsa/accel_dsa.o 00:02:26.129 CC module/accel/iaa/accel_iaa.o 00:02:26.129 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.129 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.129 CC module/sock/posix/posix.o 00:02:26.129 LIB libspdk_env_dpdk_rpc.a 00:02:26.129 SO libspdk_env_dpdk_rpc.so.5.0 00:02:26.390 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.390 LIB libspdk_accel_error.a 00:02:26.390 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.390 LIB libspdk_scheduler_gscheduler.a 00:02:26.390 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:26.390 SO libspdk_accel_error.so.1.0 00:02:26.390 LIB libspdk_scheduler_dynamic.a 00:02:26.390 LIB libspdk_accel_ioat.a 00:02:26.390 SO libspdk_scheduler_gscheduler.so.3.0 00:02:26.390 LIB libspdk_accel_iaa.a 00:02:26.390 SO libspdk_accel_ioat.so.5.0 00:02:26.390 SO libspdk_scheduler_dynamic.so.3.0 00:02:26.390 SYMLINK libspdk_accel_error.so 00:02:26.390 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.390 LIB libspdk_accel_dsa.a 00:02:26.390 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.390 LIB libspdk_blob_bdev.a 00:02:26.390 SO libspdk_accel_iaa.so.2.0 00:02:26.390 SYMLINK libspdk_accel_ioat.so 00:02:26.390 SO libspdk_accel_dsa.so.4.0 00:02:26.390 SO libspdk_blob_bdev.so.10.1 00:02:26.390 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.390 SYMLINK libspdk_accel_iaa.so 00:02:26.390 SYMLINK libspdk_blob_bdev.so 00:02:26.390 SYMLINK libspdk_accel_dsa.so 00:02:26.651 LIB libspdk_sock_posix.a 00:02:26.911 SO libspdk_sock_posix.so.5.0 00:02:26.911 CC module/bdev/delay/vbdev_delay.o 00:02:26.911 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:26.911 CC module/bdev/gpt/gpt.o 00:02:26.911 CC module/bdev/error/vbdev_error.o 00:02:26.911 CC module/blobfs/bdev/blobfs_bdev.o 00:02:26.911 CC module/bdev/error/vbdev_error_rpc.o 00:02:26.911 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:26.911 CC module/bdev/gpt/vbdev_gpt.o 00:02:26.911 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:26.911 CC module/bdev/malloc/bdev_malloc.o 00:02:26.911 CC module/bdev/passthru/vbdev_passthru.o 00:02:26.911 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:26.911 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:26.911 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:26.911 CC module/bdev/lvol/vbdev_lvol.o 00:02:26.911 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:26.911 CC module/bdev/iscsi/bdev_iscsi.o 00:02:26.911 CC module/bdev/null/bdev_null.o 
00:02:26.911 CC module/bdev/null/bdev_null_rpc.o 00:02:26.911 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:26.911 CC module/bdev/ftl/bdev_ftl.o 00:02:26.911 CC module/bdev/split/vbdev_split_rpc.o 00:02:26.911 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:26.911 CC module/bdev/split/vbdev_split.o 00:02:26.911 CC module/bdev/raid/bdev_raid.o 00:02:26.911 CC module/bdev/nvme/bdev_nvme.o 00:02:26.911 CC module/bdev/raid/bdev_raid_rpc.o 00:02:26.911 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:26.911 CC module/bdev/raid/bdev_raid_sb.o 00:02:26.911 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:26.911 CC module/bdev/nvme/nvme_rpc.o 00:02:26.911 CC module/bdev/raid/raid0.o 00:02:26.911 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:26.911 CC module/bdev/raid/raid1.o 00:02:26.911 CC module/bdev/nvme/bdev_mdns_client.o 00:02:26.911 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:26.911 CC module/bdev/aio/bdev_aio.o 00:02:26.911 CC module/bdev/raid/concat.o 00:02:26.911 CC module/bdev/aio/bdev_aio_rpc.o 00:02:26.911 CC module/bdev/nvme/vbdev_opal.o 00:02:26.911 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:26.911 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:26.911 SYMLINK libspdk_sock_posix.so 00:02:27.172 LIB libspdk_blobfs_bdev.a 00:02:27.172 SO libspdk_blobfs_bdev.so.5.0 00:02:27.172 LIB libspdk_bdev_split.a 00:02:27.172 LIB libspdk_bdev_error.a 00:02:27.172 LIB libspdk_bdev_gpt.a 00:02:27.172 LIB libspdk_bdev_passthru.a 00:02:27.172 SO libspdk_bdev_split.so.5.0 00:02:27.172 SO libspdk_bdev_error.so.5.0 00:02:27.172 LIB libspdk_bdev_null.a 00:02:27.172 SYMLINK libspdk_blobfs_bdev.so 00:02:27.172 SO libspdk_bdev_gpt.so.5.0 00:02:27.172 SO libspdk_bdev_passthru.so.5.0 00:02:27.172 LIB libspdk_bdev_ftl.a 00:02:27.172 LIB libspdk_bdev_zone_block.a 00:02:27.172 LIB libspdk_bdev_delay.a 00:02:27.172 LIB libspdk_bdev_malloc.a 00:02:27.172 SO libspdk_bdev_null.so.5.0 00:02:27.172 LIB libspdk_bdev_aio.a 00:02:27.172 SYMLINK libspdk_bdev_split.so 00:02:27.172 SO libspdk_bdev_zone_block.so.5.0 00:02:27.172 SYMLINK libspdk_bdev_error.so 00:02:27.172 SO libspdk_bdev_ftl.so.5.0 00:02:27.172 LIB libspdk_bdev_iscsi.a 00:02:27.172 SYMLINK libspdk_bdev_passthru.so 00:02:27.172 SYMLINK libspdk_bdev_gpt.so 00:02:27.172 SO libspdk_bdev_delay.so.5.0 00:02:27.172 SO libspdk_bdev_aio.so.5.0 00:02:27.172 SO libspdk_bdev_malloc.so.5.0 00:02:27.172 SYMLINK libspdk_bdev_null.so 00:02:27.172 SO libspdk_bdev_iscsi.so.5.0 00:02:27.172 SYMLINK libspdk_bdev_zone_block.so 00:02:27.172 SYMLINK libspdk_bdev_ftl.so 00:02:27.432 SYMLINK libspdk_bdev_aio.so 00:02:27.432 SYMLINK libspdk_bdev_delay.so 00:02:27.432 SYMLINK libspdk_bdev_malloc.so 00:02:27.432 LIB libspdk_bdev_lvol.a 00:02:27.433 SYMLINK libspdk_bdev_iscsi.so 00:02:27.433 LIB libspdk_bdev_virtio.a 00:02:27.433 SO libspdk_bdev_lvol.so.5.0 00:02:27.433 SO libspdk_bdev_virtio.so.5.0 00:02:27.433 SYMLINK libspdk_bdev_lvol.so 00:02:27.433 SYMLINK libspdk_bdev_virtio.so 00:02:27.693 LIB libspdk_bdev_raid.a 00:02:27.693 SO libspdk_bdev_raid.so.5.0 00:02:27.693 SYMLINK libspdk_bdev_raid.so 00:02:28.635 LIB libspdk_bdev_nvme.a 00:02:28.635 SO libspdk_bdev_nvme.so.6.0 00:02:28.896 SYMLINK libspdk_bdev_nvme.so 00:02:29.469 CC module/event/subsystems/sock/sock.o 00:02:29.469 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.469 CC module/event/subsystems/vmd/vmd.o 00:02:29.469 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.469 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.469 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.469 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:02:29.469 LIB libspdk_event_sock.a 00:02:29.469 LIB libspdk_event_vhost_blk.a 00:02:29.469 LIB libspdk_event_scheduler.a 00:02:29.469 LIB libspdk_event_vmd.a 00:02:29.469 LIB libspdk_event_iobuf.a 00:02:29.469 SO libspdk_event_vhost_blk.so.2.0 00:02:29.469 SO libspdk_event_sock.so.4.0 00:02:29.469 SO libspdk_event_scheduler.so.3.0 00:02:29.469 SO libspdk_event_vmd.so.5.0 00:02:29.469 SO libspdk_event_iobuf.so.2.0 00:02:29.469 SYMLINK libspdk_event_vhost_blk.so 00:02:29.469 SYMLINK libspdk_event_sock.so 00:02:29.469 SYMLINK libspdk_event_scheduler.so 00:02:29.730 SYMLINK libspdk_event_vmd.so 00:02:29.730 SYMLINK libspdk_event_iobuf.so 00:02:29.730 CC module/event/subsystems/accel/accel.o 00:02:29.991 LIB libspdk_event_accel.a 00:02:29.991 SO libspdk_event_accel.so.5.0 00:02:29.991 SYMLINK libspdk_event_accel.so 00:02:30.252 CC module/event/subsystems/bdev/bdev.o 00:02:30.513 LIB libspdk_event_bdev.a 00:02:30.513 SO libspdk_event_bdev.so.5.0 00:02:30.513 SYMLINK libspdk_event_bdev.so 00:02:30.775 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:30.775 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:30.775 CC module/event/subsystems/scsi/scsi.o 00:02:30.775 CC module/event/subsystems/nbd/nbd.o 00:02:30.775 CC module/event/subsystems/ublk/ublk.o 00:02:31.036 LIB libspdk_event_nbd.a 00:02:31.036 LIB libspdk_event_ublk.a 00:02:31.036 LIB libspdk_event_scsi.a 00:02:31.036 SO libspdk_event_nbd.so.5.0 00:02:31.036 SO libspdk_event_ublk.so.2.0 00:02:31.036 SO libspdk_event_scsi.so.5.0 00:02:31.036 LIB libspdk_event_nvmf.a 00:02:31.036 SO libspdk_event_nvmf.so.5.0 00:02:31.036 SYMLINK libspdk_event_nbd.so 00:02:31.036 SYMLINK libspdk_event_ublk.so 00:02:31.036 SYMLINK libspdk_event_scsi.so 00:02:31.036 SYMLINK libspdk_event_nvmf.so 00:02:31.296 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.296 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.558 LIB libspdk_event_iscsi.a 00:02:31.558 LIB libspdk_event_vhost_scsi.a 00:02:31.558 SO libspdk_event_iscsi.so.5.0 00:02:31.558 SO libspdk_event_vhost_scsi.so.2.0 00:02:31.558 SYMLINK libspdk_event_iscsi.so 00:02:31.558 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.558 SO libspdk.so.5.0 00:02:31.558 SYMLINK libspdk.so 00:02:32.144 CC app/trace_record/trace_record.o 00:02:32.144 CC app/spdk_top/spdk_top.o 00:02:32.144 TEST_HEADER include/spdk/accel_module.h 00:02:32.144 TEST_HEADER include/spdk/accel.h 00:02:32.144 TEST_HEADER include/spdk/barrier.h 00:02:32.144 TEST_HEADER include/spdk/base64.h 00:02:32.144 TEST_HEADER include/spdk/assert.h 00:02:32.144 TEST_HEADER include/spdk/bdev_module.h 00:02:32.144 CC app/spdk_nvme_identify/identify.o 00:02:32.144 TEST_HEADER include/spdk/bdev.h 00:02:32.144 CC app/spdk_nvme_perf/perf.o 00:02:32.144 CXX app/trace/trace.o 00:02:32.144 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.144 TEST_HEADER include/spdk/bit_pool.h 00:02:32.144 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.144 TEST_HEADER include/spdk/bit_array.h 00:02:32.144 TEST_HEADER include/spdk/blobfs.h 00:02:32.144 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.144 TEST_HEADER include/spdk/blob.h 00:02:32.144 TEST_HEADER include/spdk/conf.h 00:02:32.144 TEST_HEADER include/spdk/cpuset.h 00:02:32.144 TEST_HEADER include/spdk/config.h 00:02:32.144 TEST_HEADER include/spdk/crc16.h 00:02:32.144 TEST_HEADER include/spdk/dif.h 00:02:32.144 CC app/spdk_lspci/spdk_lspci.o 00:02:32.144 TEST_HEADER include/spdk/dma.h 00:02:32.144 TEST_HEADER include/spdk/endian.h 00:02:32.144 TEST_HEADER include/spdk/crc32.h 00:02:32.144 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:32.144 TEST_HEADER include/spdk/crc64.h 00:02:32.144 TEST_HEADER include/spdk/event.h 00:02:32.144 TEST_HEADER include/spdk/fd_group.h 00:02:32.144 TEST_HEADER include/spdk/fd.h 00:02:32.144 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.144 TEST_HEADER include/spdk/file.h 00:02:32.144 TEST_HEADER include/spdk/hexlify.h 00:02:32.144 TEST_HEADER include/spdk/histogram_data.h 00:02:32.144 TEST_HEADER include/spdk/idxd.h 00:02:32.144 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.144 TEST_HEADER include/spdk/init.h 00:02:32.144 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.144 TEST_HEADER include/spdk/ftl.h 00:02:32.144 TEST_HEADER include/spdk/ioat.h 00:02:32.144 TEST_HEADER include/spdk/json.h 00:02:32.144 CC test/rpc_client/rpc_client_test.o 00:02:32.144 TEST_HEADER include/spdk/env.h 00:02:32.144 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.144 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.144 TEST_HEADER include/spdk/log.h 00:02:32.144 TEST_HEADER include/spdk/lvol.h 00:02:32.144 TEST_HEADER include/spdk/memory.h 00:02:32.144 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.144 TEST_HEADER include/spdk/nvme.h 00:02:32.144 TEST_HEADER include/spdk/notify.h 00:02:32.144 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.144 TEST_HEADER include/spdk/mmio.h 00:02:32.144 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.144 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.144 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.144 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.144 TEST_HEADER include/spdk/nbd.h 00:02:32.144 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.144 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.144 TEST_HEADER include/spdk/nvmf.h 00:02:32.144 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.144 TEST_HEADER include/spdk/likely.h 00:02:32.144 TEST_HEADER include/spdk/pci_ids.h 00:02:32.144 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.144 TEST_HEADER include/spdk/opal.h 00:02:32.144 TEST_HEADER include/spdk/pipe.h 00:02:32.144 TEST_HEADER include/spdk/queue.h 00:02:32.144 TEST_HEADER include/spdk/reduce.h 00:02:32.144 TEST_HEADER include/spdk/rpc.h 00:02:32.144 TEST_HEADER include/spdk/scsi.h 00:02:32.144 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.144 CC app/spdk_dd/spdk_dd.o 00:02:32.144 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.144 TEST_HEADER include/spdk/stdinc.h 00:02:32.144 TEST_HEADER include/spdk/sock.h 00:02:32.144 TEST_HEADER include/spdk/thread.h 00:02:32.144 TEST_HEADER include/spdk/string.h 00:02:32.144 TEST_HEADER include/spdk/trace.h 00:02:32.144 CC app/nvmf_tgt/nvmf_main.o 00:02:32.144 TEST_HEADER include/spdk/opal_spec.h 00:02:32.144 TEST_HEADER include/spdk/scheduler.h 00:02:32.144 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.144 TEST_HEADER include/spdk/tree.h 00:02:32.144 TEST_HEADER include/spdk/ublk.h 00:02:32.144 TEST_HEADER include/spdk/util.h 00:02:32.144 TEST_HEADER include/spdk/version.h 00:02:32.144 TEST_HEADER include/spdk/uuid.h 00:02:32.144 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.144 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.144 TEST_HEADER include/spdk/vmd.h 00:02:32.144 TEST_HEADER include/spdk/vhost.h 00:02:32.144 TEST_HEADER include/spdk/trace_parser.h 00:02:32.144 TEST_HEADER include/spdk/xor.h 00:02:32.144 TEST_HEADER include/spdk/zipf.h 00:02:32.144 CXX test/cpp_headers/accel.o 00:02:32.144 CXX test/cpp_headers/accel_module.o 00:02:32.144 CXX test/cpp_headers/base64.o 00:02:32.144 CXX test/cpp_headers/barrier.o 00:02:32.144 CC examples/ioat/perf/perf.o 00:02:32.144 CXX 
test/cpp_headers/bdev.o 00:02:32.144 CC app/spdk_tgt/spdk_tgt.o 00:02:32.144 CXX test/cpp_headers/bdev_module.o 00:02:32.144 CXX test/cpp_headers/bdev_zone.o 00:02:32.144 CXX test/cpp_headers/bit_pool.o 00:02:32.144 CXX test/cpp_headers/assert.o 00:02:32.144 CXX test/cpp_headers/bit_array.o 00:02:32.144 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.144 CXX test/cpp_headers/blobfs.o 00:02:32.144 CXX test/cpp_headers/config.o 00:02:32.144 CXX test/cpp_headers/crc16.o 00:02:32.144 CXX test/cpp_headers/crc32.o 00:02:32.144 CXX test/cpp_headers/dma.o 00:02:32.144 CXX test/cpp_headers/dif.o 00:02:32.144 CXX test/cpp_headers/crc64.o 00:02:32.144 CXX test/cpp_headers/conf.o 00:02:32.144 CXX test/cpp_headers/blob.o 00:02:32.144 CXX test/cpp_headers/env_dpdk.o 00:02:32.144 CXX test/cpp_headers/cpuset.o 00:02:32.144 CXX test/cpp_headers/blob_bdev.o 00:02:32.144 CXX test/cpp_headers/endian.o 00:02:32.144 CXX test/cpp_headers/ftl.o 00:02:32.144 CXX test/cpp_headers/env.o 00:02:32.144 CC app/vhost/vhost.o 00:02:32.144 CXX test/cpp_headers/event.o 00:02:32.144 CXX test/cpp_headers/hexlify.o 00:02:32.144 CXX test/cpp_headers/fd.o 00:02:32.144 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.144 CXX test/cpp_headers/fd_group.o 00:02:32.144 CXX test/cpp_headers/idxd.o 00:02:32.144 CC test/nvme/reserve/reserve.o 00:02:32.144 CC examples/sock/hello_world/hello_sock.o 00:02:32.144 CXX test/cpp_headers/file.o 00:02:32.144 CXX test/cpp_headers/gpt_spec.o 00:02:32.144 CXX test/cpp_headers/init.o 00:02:32.144 CC examples/ioat/verify/verify.o 00:02:32.144 CXX test/cpp_headers/ioat_spec.o 00:02:32.144 CC examples/vmd/led/led.o 00:02:32.144 CXX test/cpp_headers/histogram_data.o 00:02:32.144 CXX test/cpp_headers/iscsi_spec.o 00:02:32.144 CXX test/cpp_headers/json.o 00:02:32.144 CXX test/cpp_headers/ioat.o 00:02:32.144 CC test/nvme/overhead/overhead.o 00:02:32.144 CXX test/cpp_headers/idxd_spec.o 00:02:32.144 CXX test/cpp_headers/jsonrpc.o 00:02:32.144 CXX test/cpp_headers/likely.o 00:02:32.144 CC examples/nvme/abort/abort.o 00:02:32.144 CC examples/vmd/lsvmd/lsvmd.o 00:02:32.144 CC test/nvme/err_injection/err_injection.o 00:02:32.144 CC test/event/app_repeat/app_repeat.o 00:02:32.144 CXX test/cpp_headers/mmio.o 00:02:32.144 CXX test/cpp_headers/nbd.o 00:02:32.144 CXX test/cpp_headers/log.o 00:02:32.144 CXX test/cpp_headers/nvme.o 00:02:32.144 CXX test/cpp_headers/nvme_intel.o 00:02:32.144 CXX test/cpp_headers/lvol.o 00:02:32.144 CXX test/cpp_headers/memory.o 00:02:32.144 CC test/accel/dif/dif.o 00:02:32.144 CXX test/cpp_headers/nvme_zns.o 00:02:32.144 CXX test/cpp_headers/notify.o 00:02:32.144 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.144 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.144 CC test/env/vtophys/vtophys.o 00:02:32.144 CC test/env/pci/pci_ut.o 00:02:32.144 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.144 CXX test/cpp_headers/nvme_spec.o 00:02:32.144 CC app/fio/nvme/fio_plugin.o 00:02:32.144 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.144 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.144 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.144 CC examples/nvmf/nvmf/nvmf.o 00:02:32.144 CXX test/cpp_headers/opal_spec.o 00:02:32.144 CXX test/cpp_headers/nvmf.o 00:02:32.144 CXX test/cpp_headers/nvmf_spec.o 00:02:32.144 CXX test/cpp_headers/opal.o 00:02:32.144 CXX test/cpp_headers/pci_ids.o 00:02:32.144 CXX test/cpp_headers/nvmf_transport.o 00:02:32.144 CC test/app/stub/stub.o 00:02:32.144 CC test/event/event_perf/event_perf.o 00:02:32.144 CXX test/cpp_headers/pipe.o 00:02:32.144 CC 
examples/thread/thread/thread_ex.o 00:02:32.144 CC test/app/histogram_perf/histogram_perf.o 00:02:32.144 CC test/env/memory/memory_ut.o 00:02:32.144 CXX test/cpp_headers/rpc.o 00:02:32.144 CXX test/cpp_headers/scheduler.o 00:02:32.144 CXX test/cpp_headers/scsi.o 00:02:32.144 CC test/nvme/connect_stress/connect_stress.o 00:02:32.144 CC test/thread/poller_perf/poller_perf.o 00:02:32.144 CXX test/cpp_headers/queue.o 00:02:32.144 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.144 CXX test/cpp_headers/reduce.o 00:02:32.144 CC test/blobfs/mkfs/mkfs.o 00:02:32.144 CC examples/idxd/perf/perf.o 00:02:32.144 CC test/app/bdev_svc/bdev_svc.o 00:02:32.144 CC test/nvme/fdp/fdp.o 00:02:32.144 CC examples/accel/perf/accel_perf.o 00:02:32.144 CC examples/blob/hello_world/hello_blob.o 00:02:32.416 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.416 CC examples/nvme/hotplug/hotplug.o 00:02:32.416 CC test/bdev/bdevio/bdevio.o 00:02:32.416 CXX test/cpp_headers/scsi_spec.o 00:02:32.416 CC test/nvme/compliance/nvme_compliance.o 00:02:32.416 CC test/nvme/sgl/sgl.o 00:02:32.416 CC test/nvme/reset/reset.o 00:02:32.416 CC test/event/scheduler/scheduler.o 00:02:32.416 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.416 CC test/nvme/boot_partition/boot_partition.o 00:02:32.416 CC test/nvme/e2edp/nvme_dp.o 00:02:32.416 CC test/nvme/startup/startup.o 00:02:32.416 CC examples/nvme/arbitration/arbitration.o 00:02:32.416 CC test/nvme/aer/aer.o 00:02:32.416 CC test/env/mem_callbacks/mem_callbacks.o 00:02:32.416 CC test/app/jsoncat/jsoncat.o 00:02:32.416 CC examples/util/zipf/zipf.o 00:02:32.416 LINK spdk_trace_record 00:02:32.416 CC examples/nvme/hello_world/hello_world.o 00:02:32.416 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.416 CC test/nvme/cuse/cuse.o 00:02:32.416 CC test/event/reactor/reactor.o 00:02:32.416 CC test/lvol/esnap/esnap.o 00:02:32.416 CXX test/cpp_headers/sock.o 00:02:32.416 LINK led 00:02:32.417 CC examples/nvme/reconnect/reconnect.o 00:02:32.417 CC test/event/reactor_perf/reactor_perf.o 00:02:32.417 LINK rpc_client_test 00:02:32.417 LINK ioat_perf 00:02:32.417 LINK cmb_copy 00:02:32.417 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.688 CC app/fio/bdev/fio_plugin.o 00:02:32.688 CC test/nvme/simple_copy/simple_copy.o 00:02:32.688 CC test/dma/test_dma/test_dma.o 00:02:32.688 CXX test/cpp_headers/string.o 00:02:32.688 CC examples/blob/cli/blobcli.o 00:02:32.688 CXX test/cpp_headers/thread.o 00:02:32.688 CXX test/cpp_headers/stdinc.o 00:02:32.688 LINK err_injection 00:02:32.688 CXX test/cpp_headers/trace.o 00:02:32.688 LINK interrupt_tgt 00:02:32.688 LINK reserve 00:02:32.688 CXX test/cpp_headers/trace_parser.o 00:02:32.688 LINK nvmf_tgt 00:02:32.688 CXX test/cpp_headers/tree.o 00:02:32.688 CXX test/cpp_headers/ublk.o 00:02:32.688 CXX test/cpp_headers/util.o 00:02:32.688 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:32.688 CXX test/cpp_headers/version.o 00:02:32.688 CXX test/cpp_headers/vfio_user_pci.o 00:02:32.688 CXX test/cpp_headers/vhost.o 00:02:32.688 CXX test/cpp_headers/vmd.o 00:02:32.688 CXX test/cpp_headers/uuid.o 00:02:32.688 LINK spdk_tgt 00:02:32.688 CXX test/cpp_headers/vfio_user_spec.o 00:02:32.688 CXX test/cpp_headers/xor.o 00:02:32.688 LINK stub 00:02:32.688 CXX test/cpp_headers/zipf.o 00:02:32.688 LINK env_dpdk_post_init 00:02:32.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:32.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:32.688 LINK mkfs 00:02:32.688 LINK iscsi_tgt 00:02:32.688 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:32.688 LINK app_repeat 
00:02:32.688 LINK spdk_dd 00:02:32.688 LINK hello_sock 00:02:32.688 LINK connect_stress 00:02:32.688 LINK overhead 00:02:32.688 LINK nvmf 00:02:32.688 LINK scheduler 00:02:32.688 LINK fused_ordering 00:02:32.688 LINK sgl 00:02:32.949 LINK hotplug 00:02:32.949 LINK spdk_lspci 00:02:32.949 LINK bdevio 00:02:32.949 LINK thread 00:02:32.949 LINK reset 00:02:32.949 LINK fdp 00:02:32.949 LINK pci_ut 00:02:32.949 LINK startup 00:02:32.949 LINK simple_copy 00:02:32.949 LINK event_perf 00:02:32.949 LINK jsoncat 00:02:32.949 LINK lsvmd 00:02:32.949 LINK boot_partition 00:02:32.949 LINK reactor 00:02:32.949 LINK doorbell_aers 00:02:32.949 LINK pmr_persistence 00:02:32.949 LINK vtophys 00:02:32.949 LINK histogram_perf 00:02:32.949 LINK spdk_nvme_discover 00:02:32.949 LINK vhost 00:02:32.949 LINK poller_perf 00:02:32.949 LINK bdev_svc 00:02:32.949 LINK verify 00:02:33.208 LINK nvme_fuzz 00:02:33.208 LINK spdk_nvme_identify 00:02:33.208 LINK reactor_perf 00:02:33.208 LINK zipf 00:02:33.208 LINK spdk_top 00:02:33.208 LINK hello_world 00:02:33.208 LINK test_dma 00:02:33.208 LINK hello_bdev 00:02:33.208 LINK idxd_perf 00:02:33.208 LINK arbitration 00:02:33.208 LINK nvme_dp 00:02:33.208 LINK hello_blob 00:02:33.208 LINK vhost_fuzz 00:02:33.208 LINK aer 00:02:33.208 LINK dif 00:02:33.208 LINK reconnect 00:02:33.208 LINK nvme_compliance 00:02:33.208 LINK mem_callbacks 00:02:33.469 LINK abort 00:02:33.469 LINK nvme_manage 00:02:33.469 LINK blobcli 00:02:33.469 LINK memory_ut 00:02:33.469 LINK spdk_nvme 00:02:33.469 LINK spdk_trace 00:02:33.469 LINK accel_perf 00:02:33.469 LINK spdk_bdev 00:02:33.469 LINK cuse 00:02:33.730 LINK spdk_nvme_perf 00:02:33.730 LINK bdevperf 00:02:34.301 LINK iscsi_fuzz 00:02:36.216 LINK esnap 00:02:36.476 00:02:36.476 real 0m45.438s 00:02:36.476 user 6m6.504s 00:02:36.476 sys 3m56.719s 00:02:36.476 22:45:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:36.476 22:45:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.476 ************************************ 00:02:36.476 END TEST make 00:02:36.476 ************************************ 00:02:36.737 22:45:04 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:36.737 22:45:04 -- nvmf/common.sh@7 -- # uname -s 00:02:36.737 22:45:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:36.737 22:45:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:36.737 22:45:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:36.737 22:45:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:36.737 22:45:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:36.737 22:45:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:36.737 22:45:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:36.737 22:45:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:36.737 22:45:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:36.737 22:45:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:36.737 22:45:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:36.737 22:45:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:36.737 22:45:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:36.737 22:45:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:36.737 22:45:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:36.737 22:45:04 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:36.737 22:45:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:36.737 22:45:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.737 22:45:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.737 22:45:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.737 22:45:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.737 22:45:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.738 22:45:04 -- paths/export.sh@5 -- # export PATH 00:02:36.738 22:45:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.738 22:45:04 -- nvmf/common.sh@46 -- # : 0 00:02:36.738 22:45:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:36.738 22:45:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:36.738 22:45:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:36.738 22:45:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:36.738 22:45:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:36.738 22:45:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:36.738 22:45:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:36.738 22:45:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:36.738 22:45:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:36.738 22:45:04 -- spdk/autotest.sh@32 -- # uname -s 00:02:36.738 22:45:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:36.738 22:45:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:36.738 22:45:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.738 22:45:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:36.738 22:45:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.738 22:45:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:36.738 22:45:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:36.738 22:45:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:36.738 22:45:04 -- spdk/autotest.sh@48 -- # udevadm_pid=3820772 00:02:36.738 22:45:04 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:36.738 22:45:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:36.738 22:45:04 -- spdk/autotest.sh@54 -- # echo 3820774 
00:02:36.738 22:45:04 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:36.738 22:45:04 -- spdk/autotest.sh@56 -- # echo 3820775 00:02:36.738 22:45:04 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:36.738 22:45:04 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:36.738 22:45:04 -- spdk/autotest.sh@60 -- # echo 3820776 00:02:36.738 22:45:04 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:36.738 22:45:04 -- spdk/autotest.sh@62 -- # echo 3820777 00:02:36.738 22:45:04 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:36.738 22:45:04 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:36.738 22:45:04 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:36.738 22:45:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:36.738 22:45:04 -- common/autotest_common.sh@10 -- # set +x 00:02:36.738 22:45:04 -- spdk/autotest.sh@70 -- # create_test_list 00:02:36.738 22:45:04 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:36.738 22:45:04 -- common/autotest_common.sh@10 -- # set +x 00:02:36.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:36.738 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:36.738 22:45:04 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:36.738 22:45:04 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.738 22:45:04 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.738 22:45:04 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:36.738 22:45:04 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.738 22:45:04 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:36.738 22:45:04 -- common/autotest_common.sh@1440 -- # uname 00:02:36.738 22:45:04 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:36.738 22:45:04 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:36.738 22:45:04 -- common/autotest_common.sh@1460 -- # uname 00:02:36.738 22:45:04 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:36.738 22:45:04 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:36.738 22:45:04 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:36.738 22:45:04 -- spdk/autotest.sh@83 -- # hash lcov 00:02:36.738 22:45:04 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:36.738 22:45:04 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:36.738 --rc lcov_branch_coverage=1 00:02:36.738 --rc lcov_function_coverage=1 00:02:36.738 --rc genhtml_branch_coverage=1 00:02:36.738 --rc genhtml_function_coverage=1 00:02:36.738 --rc genhtml_legend=1 00:02:36.738 --rc geninfo_all_blocks=1 00:02:36.738 ' 00:02:36.738 22:45:04 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 
00:02:36.738 --rc lcov_branch_coverage=1 00:02:36.738 --rc lcov_function_coverage=1 00:02:36.738 --rc genhtml_branch_coverage=1 00:02:36.738 --rc genhtml_function_coverage=1 00:02:36.738 --rc genhtml_legend=1 00:02:36.738 --rc geninfo_all_blocks=1 00:02:36.738 ' 00:02:36.738 22:45:04 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:36.738 --rc lcov_branch_coverage=1 00:02:36.738 --rc lcov_function_coverage=1 00:02:36.738 --rc genhtml_branch_coverage=1 00:02:36.738 --rc genhtml_function_coverage=1 00:02:36.738 --rc genhtml_legend=1 00:02:36.738 --rc geninfo_all_blocks=1 00:02:36.738 --no-external' 00:02:36.738 22:45:04 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:36.738 --rc lcov_branch_coverage=1 00:02:36.738 --rc lcov_function_coverage=1 00:02:36.738 --rc genhtml_branch_coverage=1 00:02:36.738 --rc genhtml_function_coverage=1 00:02:36.738 --rc genhtml_legend=1 00:02:36.738 --rc geninfo_all_blocks=1 00:02:36.738 --no-external' 00:02:36.738 22:45:04 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:37.031 lcov: LCOV version 1.14 00:02:37.031 22:45:04 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:49.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:49.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:49.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:49.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:49.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:49.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:01.511 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:01.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:01.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:01.774 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:01.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:02.037 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:02.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.037 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:02.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:02.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:02.299 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:03.686 22:45:31 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:03.686 22:45:31 -- common/autotest_common.sh@712 -- # xtrace_disable 
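The long run of geninfo "no functions found" / "GCOV did not produce any data" warnings above is expected: the test/cpp_headers objects are compile-only checks of the public SPDK headers, so their .gcno files contain no executable functions and contribute nothing to the coverage report. To list which headers are affected, a saved copy of this console output can be filtered with standard tools (illustrative only; "console.log" is a placeholder file name, not something this job produces):

  grep 'GCOV did not produce any data' console.log | awk -F/ '{print $NF}' | sort -u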
00:03:03.686 22:45:31 -- common/autotest_common.sh@10 -- # set +x 00:03:03.686 22:45:31 -- spdk/autotest.sh@102 -- # rm -f 00:03:03.686 22:45:31 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.895 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:07.895 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:07.895 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:07.895 22:45:35 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:07.895 22:45:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:07.895 22:45:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:07.895 22:45:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:07.895 22:45:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:07.895 22:45:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:07.895 22:45:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:07.895 22:45:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.895 22:45:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:07.895 22:45:35 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:07.895 22:45:35 -- spdk/autotest.sh@121 -- # grep -v p 00:03:07.895 22:45:35 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:07.895 22:45:35 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:07.895 22:45:35 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:07.895 22:45:35 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:07.895 22:45:35 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:07.895 22:45:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.895 No valid GPT data, bailing 00:03:07.895 22:45:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.895 22:45:35 -- scripts/common.sh@393 -- # pt= 00:03:07.895 22:45:35 -- scripts/common.sh@394 -- # return 1 00:03:07.895 22:45:35 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.895 1+0 records in 00:03:07.895 1+0 records out 00:03:07.895 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00144973 s, 723 MB/s 00:03:07.895 22:45:35 -- spdk/autotest.sh@129 -- # sync 00:03:07.895 22:45:35 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.895 22:45:35 -- common/autotest_common.sh@22 -- # eval 
'reap_spdk_processes 12> /dev/null' 00:03:07.895 22:45:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:16.044 22:45:42 -- spdk/autotest.sh@135 -- # uname -s 00:03:16.044 22:45:42 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:16.044 22:45:42 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.044 22:45:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.044 22:45:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.044 22:45:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.044 ************************************ 00:03:16.044 START TEST setup.sh 00:03:16.044 ************************************ 00:03:16.044 22:45:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:16.044 * Looking for test storage... 00:03:16.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.044 22:45:42 -- setup/test-setup.sh@10 -- # uname -s 00:03:16.044 22:45:42 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:16.044 22:45:42 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.044 22:45:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:16.044 22:45:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:16.044 22:45:42 -- common/autotest_common.sh@10 -- # set +x 00:03:16.044 ************************************ 00:03:16.044 START TEST acl 00:03:16.044 ************************************ 00:03:16.044 22:45:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:16.044 * Looking for test storage... 
00:03:16.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:16.044 22:45:42 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:16.044 22:45:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:16.044 22:45:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:16.044 22:45:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:16.044 22:45:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:16.044 22:45:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:16.044 22:45:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:16.044 22:45:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.044 22:45:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:16.044 22:45:42 -- setup/acl.sh@12 -- # devs=() 00:03:16.044 22:45:42 -- setup/acl.sh@12 -- # declare -a devs 00:03:16.044 22:45:42 -- setup/acl.sh@13 -- # drivers=() 00:03:16.044 22:45:42 -- setup/acl.sh@13 -- # declare -A drivers 00:03:16.044 22:45:42 -- setup/acl.sh@51 -- # setup reset 00:03:16.044 22:45:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.044 22:45:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.418 22:45:46 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:19.418 22:45:46 -- setup/acl.sh@16 -- # local dev driver 00:03:19.418 22:45:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:19.418 22:45:46 -- setup/acl.sh@15 -- # setup output status 00:03:19.418 22:45:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.418 22:45:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.967 Hugepages 00:03:21.967 node hugesize free / total 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 00:03:21.967 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
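The get_zoned_devs helper traced just above walks /sys/block and records any NVMe namespace whose queue reports a zoned model other than "none"; here nvme0n1 reports "none", so nothing is excluded from the ACL tests. A minimal sketch of that check, with the device name taken from this run (the script itself keeps more detail per device):

  declare -A zoned_devs
  for nvme in /sys/block/nvme*; do
      dev=${nvme##*/}                                  # e.g. nvme0n1
      [[ -e $nvme/queue/zoned ]] || continue           # kernels without zoned support
      [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
  done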
00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.967 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.967 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.967 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.968 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.968 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.968 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:21.968 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:21.968 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:21.968 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.229 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:03:22.229 22:45:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:22.230 22:45:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:03:22.230 22:45:50 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:22.230 22:45:50 -- setup/acl.sh@20 -- # continue 00:03:22.230 22:45:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.230 22:45:50 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:22.230 22:45:50 -- setup/acl.sh@54 -- # run_test denied denied 00:03:22.230 22:45:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:22.230 22:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:22.230 22:45:50 -- common/autotest_common.sh@10 -- # set +x 00:03:22.230 ************************************ 00:03:22.230 START TEST denied 00:03:22.230 ************************************ 00:03:22.230 22:45:50 -- common/autotest_common.sh@1104 -- # denied 00:03:22.230 22:45:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:03:22.230 22:45:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:03:22.230 22:45:50 -- setup/acl.sh@38 -- # setup output config 00:03:22.230 22:45:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.230 22:45:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.443 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:26.443 22:45:54 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:26.443 22:45:54 -- setup/acl.sh@28 -- # local dev driver 00:03:26.443 22:45:54 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:26.443 22:45:54 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:26.443 22:45:54 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:26.443 22:45:54 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:26.443 22:45:54 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:26.443 22:45:54 -- setup/acl.sh@41 -- # setup reset 00:03:26.443 22:45:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.443 22:45:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.663 00:03:30.663 real 0m8.578s 00:03:30.663 user 0m2.974s 00:03:30.663 sys 0m4.906s 00:03:30.663 22:45:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.663 22:45:58 -- common/autotest_common.sh@10 -- # set +x 00:03:30.663 ************************************ 00:03:30.663 END TEST denied 00:03:30.663 ************************************ 00:03:30.924 22:45:58 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:30.924 22:45:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.924 22:45:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.924 22:45:58 -- common/autotest_common.sh@10 -- # set +x 00:03:30.924 ************************************ 00:03:30.924 START TEST allowed 00:03:30.924 ************************************ 00:03:30.924 22:45:58 -- common/autotest_common.sh@1104 -- # allowed 00:03:30.924 22:45:58 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:30.924 22:45:58 -- setup/acl.sh@45 -- # setup output config 00:03:30.924 22:45:58 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:30.924 22:45:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.924 22:45:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
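The denied and allowed tests around this point drive scripts/setup.sh through its PCI_BLOCKED and PCI_ALLOWED environment variables and verify that the NVMe controller at 0000:65:00.0 is either skipped or bound as requested. Condensed from the commands visible in the trace (a sketch of the pattern being exercised, not the acl.sh implementation):

  # denied: the blocked controller must be reported as skipped
  PCI_BLOCKED=' 0000:65:00.0' ./scripts/setup.sh config \
      | grep 'Skipping denied controller at 0000:65:00.0'
  ./scripts/setup.sh reset

  # allowed: the listed controller must be rebound for SPDK use
  PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config \
      | grep -E '0000:65:00.0 .*: nvme -> .*'
  ./scripts/setup.sh reset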
00:03:36.213 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:36.213 22:46:04 -- setup/acl.sh@47 -- # verify 00:03:36.213 22:46:04 -- setup/acl.sh@28 -- # local dev driver 00:03:36.214 22:46:04 -- setup/acl.sh@48 -- # setup reset 00:03:36.214 22:46:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.214 22:46:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.427 00:03:40.427 real 0m9.474s 00:03:40.427 user 0m2.780s 00:03:40.427 sys 0m4.986s 00:03:40.427 22:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.427 22:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.427 ************************************ 00:03:40.427 END TEST allowed 00:03:40.427 ************************************ 00:03:40.427 00:03:40.427 real 0m25.462s 00:03:40.427 user 0m8.531s 00:03:40.427 sys 0m14.722s 00:03:40.427 22:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.427 22:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.427 ************************************ 00:03:40.427 END TEST acl 00:03:40.427 ************************************ 00:03:40.427 22:46:08 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.427 22:46:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.427 22:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.427 22:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.427 ************************************ 00:03:40.427 START TEST hugepages 00:03:40.427 ************************************ 00:03:40.427 22:46:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:40.427 * Looking for test storage... 
00:03:40.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.427 22:46:08 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:40.427 22:46:08 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:40.427 22:46:08 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:40.427 22:46:08 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:40.427 22:46:08 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:40.427 22:46:08 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:40.427 22:46:08 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:40.427 22:46:08 -- setup/common.sh@18 -- # local node= 00:03:40.427 22:46:08 -- setup/common.sh@19 -- # local var val 00:03:40.427 22:46:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.427 22:46:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.427 22:46:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.427 22:46:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.427 22:46:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.427 22:46:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 103143700 kB' 'MemAvailable: 106408384 kB' 'Buffers: 2704 kB' 'Cached: 14336040 kB' 'SwapCached: 0 kB' 'Active: 11380828 kB' 'Inactive: 3514596 kB' 'Active(anon): 10968800 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560128 kB' 'Mapped: 209984 kB' 'Shmem: 10412120 kB' 'KReclaimable: 324784 kB' 'Slab: 1195932 kB' 'SReclaimable: 324784 kB' 'SUnreclaim: 871148 kB' 'KernelStack: 27184 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 12454624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.427 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.427 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 
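The wall of compare-and-continue lines around this point is the xtrace of a single loop in setup/common.sh: get_meminfo reads /proc/meminfo one "field: value" pair at a time and keeps only the requested field (Hugepagesize here), so every other field shows up as a test followed by "continue". The whole block boils down to something like this illustrative one-liner (not the script itself):

  awk -F': +' '$1 == "Hugepagesize" {print $2+0}' /proc/meminfo    # prints 2048 on this machine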
00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # continue 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.428 22:46:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.428 22:46:08 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:40.428 22:46:08 -- setup/common.sh@33 -- # echo 2048 00:03:40.428 22:46:08 -- setup/common.sh@33 -- # return 0 00:03:40.428 22:46:08 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:40.428 22:46:08 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:40.428 22:46:08 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:40.428 22:46:08 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:40.428 22:46:08 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:40.428 22:46:08 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:40.428 22:46:08 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:40.428 22:46:08 -- setup/hugepages.sh@207 -- # get_nodes 00:03:40.428 22:46:08 -- setup/hugepages.sh@27 -- # local node 00:03:40.428 22:46:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.428 22:46:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:40.428 22:46:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.428 22:46:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.428 22:46:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.428 22:46:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.428 22:46:08 -- setup/hugepages.sh@208 -- # clear_hp 00:03:40.428 22:46:08 -- setup/hugepages.sh@37 -- # local node hp 00:03:40.428 22:46:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.428 22:46:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.428 22:46:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.428 22:46:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.428 22:46:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.428 22:46:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.428 22:46:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.428 22:46:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.428 22:46:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.428 22:46:08 -- setup/hugepages.sh@41 -- # echo 0 00:03:40.428 22:46:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.428 22:46:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.428 22:46:08 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:40.428 22:46:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.428 22:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.428 22:46:08 -- common/autotest_common.sh@10 -- # set +x 00:03:40.428 ************************************ 00:03:40.428 START TEST default_setup 00:03:40.429 ************************************ 00:03:40.429 22:46:08 -- common/autotest_common.sh@1104 -- # default_setup 00:03:40.429 22:46:08 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:40.429 22:46:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.429 22:46:08 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:40.429 22:46:08 -- setup/hugepages.sh@51 -- # shift 00:03:40.429 22:46:08 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:40.429 22:46:08 -- setup/hugepages.sh@52 -- # local node_ids 00:03:40.429 22:46:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.429 22:46:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.429 22:46:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:40.429 22:46:08 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:40.429 22:46:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.429 22:46:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.429 22:46:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.429 22:46:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.429 22:46:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.429 22:46:08 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
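The clear_hp call traced just above zeroes every per-node hugepage pool so each hugepages sub-test begins from a clean slate, and exports CLEAR_HUGE=yes for the setup.sh calls that follow. Reduced to a sketch (the redirect target is not shown by xtrace; writing to nr_hugepages is the assumption here, and the writes require root):

  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"     # covers both the 2048kB and 1048576kB pools
      done
  done
  export CLEAR_HUGE=yes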
00:03:40.429 22:46:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:40.429 22:46:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:40.429 22:46:08 -- setup/hugepages.sh@73 -- # return 0 00:03:40.429 22:46:08 -- setup/hugepages.sh@137 -- # setup output 00:03:40.429 22:46:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.429 22:46:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.735 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.735 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.735 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.735 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.735 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:43.996 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:44.288 22:46:12 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.288 22:46:12 -- setup/hugepages.sh@89 -- # local node 00:03:44.288 22:46:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.288 22:46:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.288 22:46:12 -- setup/hugepages.sh@92 -- # local surp 00:03:44.288 22:46:12 -- setup/hugepages.sh@93 -- # local resv 00:03:44.288 22:46:12 -- setup/hugepages.sh@94 -- # local anon 00:03:44.288 22:46:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.288 22:46:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.288 22:46:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.288 22:46:12 -- setup/common.sh@18 -- # local node= 00:03:44.288 22:46:12 -- setup/common.sh@19 -- # local var val 00:03:44.288 22:46:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.288 22:46:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.288 22:46:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.288 22:46:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.288 22:46:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.288 22:46:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.288 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.288 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105320700 kB' 'MemAvailable: 108585368 kB' 'Buffers: 2704 kB' 'Cached: 14336180 kB' 'SwapCached: 0 kB' 'Active: 11395956 kB' 'Inactive: 3514596 kB' 'Active(anon): 10983928 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574644 kB' 'Mapped: 210320 kB' 'Shmem: 10412260 kB' 'KReclaimable: 324752 kB' 'Slab: 1194064 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869312 kB' 'KernelStack: 
27184 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12471616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace: the remaining fields of the snapshot above, Inactive(file) through VmallocTotal, are each read with IFS=': ' / read -r var val _, compared against AnonHugePages, and skipped with continue]
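The trace above is setup/common.sh's get_meminfo helper at work: it mapfiles the whole meminfo snapshot, strips any "Node N " prefix, then reads each line with IFS=': ' and compares the key until it reaches the one it was asked for. Below is a minimal stand-alone sketch of that loop; the function name and exact structure are illustrative, not the verbatim SPDK source.

    shopt -s extglob                                     # for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")                 # per-node files prefix lines with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"       # "HugePages_Total:    1024" -> var, val
            if [[ $var == "$get" ]]; then
                echo "$val"                              # first matching key wins
                return 0
            fi
        done
        echo 0                                           # key not present
    }
    # e.g. get_meminfo_sketch AnonHugePages        -> 0 on this box
    #      get_meminfo_sketch HugePages_Surp 0     -> node 0's surplus count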
setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.289 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.289 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.290 22:46:12 -- setup/common.sh@33 -- # echo 0 00:03:44.290 22:46:12 -- setup/common.sh@33 -- # return 0 00:03:44.290 22:46:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.290 22:46:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.290 22:46:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.290 22:46:12 -- setup/common.sh@18 -- # local node= 00:03:44.290 22:46:12 -- setup/common.sh@19 -- # local var val 00:03:44.290 22:46:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.290 22:46:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.290 22:46:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.290 22:46:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.290 22:46:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.290 22:46:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105320160 kB' 'MemAvailable: 108584828 kB' 'Buffers: 2704 kB' 'Cached: 14336184 kB' 'SwapCached: 0 kB' 'Active: 11395692 kB' 'Inactive: 3514596 kB' 'Active(anon): 10983664 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574316 kB' 'Mapped: 210304 kB' 'Shmem: 10412264 kB' 'KReclaimable: 324752 kB' 'Slab: 1194024 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869272 kB' 'KernelStack: 27152 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12471628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.290 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.290 22:46:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.290 
22:46:12 -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace: the same field-by-field pass now runs against HugePages_Surp; the keys Mlocked through HugePages_Free from the snapshot above are each read and skipped with continue]
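Every snapshot captured above ends with the counters this verifier actually consumes: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB. In the kernel's accounting, Total is the size of the persistent pool, Free is the part not yet handed to a mapping, Rsvd counts pages promised to mappings but not yet faulted in, and Surp counts overcommit pages sitting above nr_hugepages. The commands below are only a quick manual way to look at the same numbers outside the test scripts.

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
    cat /proc/sys/vm/nr_hugepages        # requested pool size (1024 in this run)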
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.291 22:46:12 -- setup/common.sh@33 -- # echo 0 00:03:44.291 22:46:12 -- setup/common.sh@33 -- # return 0 00:03:44.291 22:46:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.291 22:46:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.291 22:46:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.291 22:46:12 -- setup/common.sh@18 -- # local node= 00:03:44.291 22:46:12 -- setup/common.sh@19 -- # local var val 00:03:44.291 22:46:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.291 22:46:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.291 22:46:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.291 22:46:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.291 22:46:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.291 22:46:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.291 22:46:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105320700 kB' 'MemAvailable: 108585368 kB' 'Buffers: 2704 kB' 'Cached: 14336196 kB' 'SwapCached: 0 kB' 'Active: 11395180 kB' 'Inactive: 3514596 kB' 'Active(anon): 10983152 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574296 kB' 'Mapped: 210228 kB' 'Shmem: 10412276 kB' 'KReclaimable: 324752 kB' 'Slab: 1193992 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869240 kB' 'KernelStack: 27152 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12471644 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.291 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.291 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.291 22:46:12 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[setup/common.sh@31-32 xtrace: the field-by-field pass now runs against HugePages_Rsvd; the keys Buffers through AnonHugePages from the snapshot above are each read and skipped with continue]
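Once all three lookups have completed, the trace that follows prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and evaluates (( 1024 == nr_hugepages + surp + resv )), repeating the same comparison after reading HugePages_Total a little later; here both the surplus and reserved counts are zero, so the configured pool is fully accounted for. A hedged restatement of that bookkeeping, using the illustrative get_meminfo_sketch helper sketched earlier, looks like this:

    expected=1024                                   # pool size the test asked for
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
    (( total == expected + surp + resv )) || echo "hugepage pool mismatch" >&2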
00:03:44.292 22:46:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.292 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.292 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.292 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.292 22:46:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.292 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.292 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.292 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.292 22:46:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.292 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.556 22:46:12 -- setup/common.sh@33 -- # echo 0 00:03:44.556 22:46:12 -- setup/common.sh@33 -- # return 0 00:03:44.556 22:46:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.556 22:46:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.556 nr_hugepages=1024 00:03:44.556 22:46:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.556 resv_hugepages=0 00:03:44.556 22:46:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.556 surplus_hugepages=0 00:03:44.556 22:46:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.556 anon_hugepages=0 00:03:44.556 22:46:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.556 22:46:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.556 22:46:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.556 22:46:12 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:44.556 22:46:12 -- setup/common.sh@18 -- # local node= 00:03:44.556 22:46:12 -- setup/common.sh@19 -- # local var val 00:03:44.556 22:46:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.556 22:46:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.556 22:46:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.556 22:46:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.556 22:46:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.556 22:46:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105319692 kB' 'MemAvailable: 108584360 kB' 'Buffers: 2704 kB' 'Cached: 14336208 kB' 'SwapCached: 0 kB' 'Active: 11395196 kB' 'Inactive: 3514596 kB' 'Active(anon): 10983168 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574296 kB' 'Mapped: 210228 kB' 'Shmem: 10412288 kB' 'KReclaimable: 324752 kB' 'Slab: 1193992 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869240 kB' 'KernelStack: 27184 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12471676 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235140 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.556 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.556 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.557 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.557 22:46:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.557 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.557 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.557 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.557 22:46:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.557 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.557 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.557 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.557 22:46:12 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[setup/common.sh@31-32 xtrace: the field-by-field pass now runs against HugePages_Total; the keys SwapCached through ShmemPmdMapped from the snapshot above are each read and skipped with continue]
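After the global counters check out, the trace that follows (get_nodes, then get_meminfo HugePages_Surp 0) turns to how the pool is spread across the two NUMA nodes: nodes_sys picks up 1024 pages on node 0 and 0 on node 1, and get_meminfo is pointed at /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix. The commands below are two hand-run ways to see the same per-node numbers; they are not necessarily how get_nodes itself collects them.

    sed 's/^Node 0 //' /sys/devices/system/node/node0/meminfo \
        | grep -E '^HugePages_(Total|Free|Surp):'
    # or read the per-node 2 MiB pool size straight from sysfs:
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "node${n##*node}: $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages)"
    done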
setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.558 22:46:12 -- setup/common.sh@33 -- # echo 1024 00:03:44.558 22:46:12 -- setup/common.sh@33 -- # return 0 00:03:44.558 22:46:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.558 22:46:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.558 22:46:12 -- setup/hugepages.sh@27 -- # local node 00:03:44.558 22:46:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.558 22:46:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.558 22:46:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.558 22:46:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.558 22:46:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.558 22:46:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.558 22:46:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.558 22:46:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.558 22:46:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.558 22:46:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.558 22:46:12 -- setup/common.sh@18 -- # local node=0 00:03:44.558 22:46:12 -- setup/common.sh@19 -- # local var val 00:03:44.558 22:46:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.558 22:46:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.558 22:46:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.558 22:46:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.558 22:46:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.558 22:46:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50404748 
kB' 'MemUsed: 15254260 kB' 'SwapCached: 0 kB' 'Active: 7126616 kB' 'Inactive: 3324860 kB' 'Active(anon): 6977376 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10220636 kB' 'Mapped: 57560 kB' 'AnonPages: 234048 kB' 'Shmem: 6746536 kB' 'KernelStack: 13240 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 721352 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 530680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.558 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.558 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 
22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # continue 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.559 22:46:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.559 22:46:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.559 22:46:12 -- setup/common.sh@33 -- # echo 0 00:03:44.559 22:46:12 -- setup/common.sh@33 -- # return 0 00:03:44.559 22:46:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.559 22:46:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.559 22:46:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.559 22:46:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.559 22:46:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.559 node0=1024 expecting 1024 00:03:44.559 22:46:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.559 00:03:44.559 real 0m3.957s 00:03:44.559 user 0m1.560s 00:03:44.559 sys 0m2.418s 00:03:44.559 22:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.559 22:46:12 -- common/autotest_common.sh@10 -- # set +x 00:03:44.559 ************************************ 00:03:44.559 END TEST default_setup 00:03:44.559 ************************************ 00:03:44.559 22:46:12 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:44.559 22:46:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.559 22:46:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.559 22:46:12 -- common/autotest_common.sh@10 -- # set +x 00:03:44.559 ************************************ 00:03:44.559 START TEST per_node_1G_alloc 00:03:44.559 ************************************ 00:03:44.559 22:46:12 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:44.559 22:46:12 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:44.559 22:46:12 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:44.559 22:46:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:44.559 22:46:12 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:44.559 22:46:12 -- setup/hugepages.sh@51 -- # shift 00:03:44.559 22:46:12 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:44.559 22:46:12 -- setup/hugepages.sh@52 -- # local node_ids 00:03:44.559 22:46:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.559 22:46:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.559 22:46:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:44.559 22:46:12 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:44.559 22:46:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.559 22:46:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.559 22:46:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.559 22:46:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.559 22:46:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.559 22:46:12 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:44.559 22:46:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.559 22:46:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.559 22:46:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:44.559 22:46:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:44.559 22:46:12 -- setup/hugepages.sh@73 -- # return 0 00:03:44.559 22:46:12 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:03:44.559 22:46:12 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:44.559 22:46:12 -- setup/hugepages.sh@146 -- # setup output 00:03:44.559 22:46:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.559 22:46:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.866 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.866 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:47.866 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:48.131 22:46:16 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:48.131 22:46:16 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:48.131 22:46:16 -- setup/hugepages.sh@89 -- # local node 00:03:48.131 22:46:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.131 22:46:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.131 22:46:16 -- setup/hugepages.sh@92 -- # local surp 00:03:48.131 22:46:16 -- setup/hugepages.sh@93 -- # local resv 00:03:48.131 22:46:16 -- setup/hugepages.sh@94 -- # local anon 00:03:48.131 22:46:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.131 22:46:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.131 22:46:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.131 22:46:16 -- setup/common.sh@18 -- # local node= 00:03:48.131 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.131 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.131 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.131 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.132 22:46:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.132 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.132 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105392508 kB' 'MemAvailable: 108657176 kB' 'Buffers: 2704 kB' 'Cached: 14336324 kB' 'SwapCached: 0 kB' 'Active: 11396196 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984168 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
575180 kB' 'Mapped: 210128 kB' 'Shmem: 10412404 kB' 'KReclaimable: 324752 kB' 'Slab: 1194408 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869656 kB' 'KernelStack: 27424 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12501460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235572 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 
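The "node0=1024 expecting 1024" line above is the tail of the per-node check in setup/hugepages.sh: the expected count for each NUMA node (nodes_test) is adjusted by the reserved and per-node surplus pages and compared with what the system actually exposes (nodes_sys). A minimal sketch of that logic, reconstructed from the traced statements; it assumes get_meminfo from setup/common.sh and resv/nodes_test from the surrounding test, and the awk read of the per-node meminfo file is only an illustration, since the trace shows the resulting values (1024 and 0) but not how they are obtained:

shopt -s extglob                      # the node+([0-9]) glob below needs extglob
declare -a nodes_sys nodes_test sorted_t sorted_s

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Illustrative read of the per-node HugePages_Total; the real script may obtain it differently.
        nodes_sys[${node##*node}]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))
}

# resv comes from an earlier get_meminfo HugePages_Rsvd call in the same test.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

With the whole 1024-page pool of 2048 kB pages sitting on node 0 and no surplus or reserved pages, the comparison reduces to 1024 == 1024, which is why default_setup passes.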
00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 
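The per_node_1G_alloc test started above asks for 1048576 kB (1 GiB) per node: get_test_nr_hugepages 1048576 0 1 turns the request into 512 default-size (2048 kB) pages, records 512 for each of nodes 0 and 1, and then re-applies the allocation via "NRHUGE=512 HUGENODE=0,1 setup output" (setup being the wrapper around scripts/setup.sh traced earlier), for 1024 pages in total. A condensed sketch reconstructed from the trace; the division by the default hugepage size is inferred, as the trace only shows the resulting 512:

: "${default_hugepages:=2048}"          # kB; matches 'Hugepagesize: 2048 kB' in the meminfo dumps

get_test_nr_hugepages() {
    local size=$1                       # 1048576 kB requested
    shift
    local node_ids=("$@")               # (0 1)
    (( size >= default_hugepages ))
    nr_hugepages=$(( size / default_hugepages ))   # inferred: 1048576 / 2048 = 512
    get_test_nr_hugepages_per_node "${node_ids[@]}"
}

get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")
    local _nr_hugepages=$nr_hugepages
    local _no_nodes=$#                  # 2 nodes requested
    local -g nodes_test=()
    (( _no_nodes > 0 )) || return 1
    for _no_nodes in "${user_nodes[@]}"; do        # the trace reuses _no_nodes as the loop variable
        nodes_test[_no_nodes]=$_nr_hugepages       # 512 pages on node 0 and 512 on node 1
    done
}

# Applied as: NRHUGE=512 HUGENODE=0,1 setup output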
00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.132 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.132 22:46:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.133 22:46:16 -- setup/common.sh@33 -- # echo 0 00:03:48.133 22:46:16 -- setup/common.sh@33 -- # return 0 00:03:48.133 22:46:16 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.133 22:46:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.133 22:46:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.133 22:46:16 -- setup/common.sh@18 -- # local node= 00:03:48.133 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.133 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.133 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.133 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.133 22:46:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.133 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.133 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105394008 kB' 'MemAvailable: 108658676 kB' 'Buffers: 2704 kB' 'Cached: 14336324 kB' 'SwapCached: 0 kB' 'Active: 11396804 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984776 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575792 kB' 'Mapped: 210136 kB' 'Shmem: 10412404 kB' 'KReclaimable: 324752 kB' 'Slab: 1194028 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869276 kB' 'KernelStack: 27408 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12501472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 
22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.133 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.133 22:46:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 
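Nearly every long run of "[[ X == \H\u\g\e... ]] / continue" entries in this log is the same helper at work: get_meminfo from setup/common.sh reads /proc/meminfo (or the per-node file when a node id is passed), strips the "Node N " prefix, and scans the keys one by one until the requested field matches, echoing its value. A minimal sketch reconstructed from the traced statements; the real helper may differ in details such as error handling:

shopt -s extglob                        # for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val
    local mem_f=/proc/meminfo
    local -a mem
    # With a node id, read the per-node file instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines are prefixed with "Node N "
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage as seen in this run: get_meminfo HugePages_Total -> 1024, get_meminfo HugePages_Surp 0 -> 0 (node 0 only).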
00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 
-- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.134 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.134 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.134 22:46:16 -- setup/common.sh@33 -- # echo 0 00:03:48.134 22:46:16 -- setup/common.sh@33 -- # return 0 00:03:48.134 22:46:16 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.134 22:46:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.134 22:46:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.134 22:46:16 -- setup/common.sh@18 -- # local node= 00:03:48.134 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.134 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.134 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.134 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.134 22:46:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.134 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.135 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105395392 kB' 'MemAvailable: 108660060 kB' 'Buffers: 2704 kB' 'Cached: 14336324 kB' 'SwapCached: 0 kB' 'Active: 11395748 kB' 'Inactive: 3514596 kB' 'Active(anon): 10983720 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574748 kB' 'Mapped: 210112 kB' 'Shmem: 10412404 kB' 'KReclaimable: 324752 kB' 'Slab: 1193996 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869244 kB' 'KernelStack: 27280 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12499840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 
22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.135 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.135 22:46:16 -- setup/common.sh@31 -- # read -r 
var val _
00:03:48.135-00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ PageTables .. HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- # continue   (one check per remaining /proc/meminfo field, none matching)
00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.401 22:46:16 -- setup/common.sh@33 -- # echo 0
00:03:48.401 22:46:16 -- setup/common.sh@33 -- # return 0
00:03:48.401 22:46:16 -- setup/hugepages.sh@100 -- # resv=0
00:03:48.401 22:46:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:48.401 nr_hugepages=1024
00:03:48.401 22:46:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:48.401 resv_hugepages=0
00:03:48.401 22:46:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:48.401 surplus_hugepages=0
00:03:48.401 22:46:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:48.401 anon_hugepages=0
00:03:48.401 22:46:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.401 22:46:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
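For reference, the lookups traced above boil down to pulling a single named field out of /proc/meminfo (or a per-node copy under /sys/devices/system/node) and then sanity-checking the hugepage counters. Below is a minimal standalone sketch of that idea in bash; the helper name meminfo_value and the expected value are illustrative, not the repository's setup/common.sh or setup/hugepages.sh code.

#!/usr/bin/env bash
# Illustrative sketch only -- mirrors the field lookup the traced get_meminfo
# loop performs; it is not the setup/common.sh implementation.
meminfo_value() {
    local key=$1 node=${2:-}      # key, e.g. HugePages_Rsvd; optional NUMA node index
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> ", so strip that first,
    # then print the value that follows "<key>:".
    sed 's/^Node [0-9]* //' "$file" | awk -v k="$key:" '$1 == k { print $2 }'
}

nr=$(meminfo_value HugePages_Total)
resv=$(meminfo_value HugePages_Rsvd)
surp=$(meminfo_value HugePages_Surp)
echo "nr_hugepages=$nr resv_hugepages=$resv surplus_hugepages=$surp"

# Simplified version of the accounting check done above: the number of pages
# requested for this run (1024 here, purely illustrative) should match what
# the kernel reports once reserved/surplus pages are accounted for.
expected=1024
if (( nr == expected )); then
    echo "hugepage accounting OK: $nr pages"
else
    echo "unexpected hugepage count: $nr (wanted $expected)" >&2
fi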
00:03:48.401 22:46:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.401 22:46:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.401 22:46:16 -- setup/common.sh@18 -- # local node= 00:03:48.401 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.401 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.401 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.401 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.401 22:46:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.401 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.401 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.401 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105394544 kB' 'MemAvailable: 108659212 kB' 'Buffers: 2704 kB' 'Cached: 14336352 kB' 'SwapCached: 0 kB' 'Active: 11396536 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984508 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575940 kB' 'Mapped: 210608 kB' 'Shmem: 10412432 kB' 'KReclaimable: 324752 kB' 'Slab: 1194036 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869284 kB' 'KernelStack: 27408 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12503120 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.401 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.401 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.401 22:46:16 -- 
setup/common.sh@31 -- # read -r var val _
00:03:48.401-00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached .. ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -- # continue   (one check per /proc/meminfo field, none matching)
00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 --
setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.402 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.402 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.402 22:46:16 -- setup/common.sh@33 -- # echo 1024 00:03:48.402 22:46:16 -- setup/common.sh@33 -- # return 0 00:03:48.402 22:46:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.402 22:46:16 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.402 22:46:16 -- setup/hugepages.sh@27 -- # local node 00:03:48.402 22:46:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.402 22:46:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.402 22:46:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.402 22:46:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.402 22:46:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.402 22:46:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.402 22:46:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.402 22:46:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.402 22:46:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.402 22:46:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.402 22:46:16 -- setup/common.sh@18 -- # local node=0 00:03:48.402 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.402 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.402 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.402 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.402 22:46:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.402 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.402 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.403 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51511484 kB' 'MemUsed: 14147524 kB' 'SwapCached: 0 kB' 'Active: 7130924 kB' 'Inactive: 3324860 kB' 'Active(anon): 6981684 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10220696 kB' 'Mapped: 57508 kB' 'AnonPages: 238872 kB' 'Shmem: 6746596 kB' 'KernelStack: 13224 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 721052 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 530380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 
-- # continue
00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ Unevictable .. HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue   (one check per node0 meminfo field, none matching)
00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': '
00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.403 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.403 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.403 22:46:16 -- setup/common.sh@33 -- # echo 0 00:03:48.403 22:46:16 -- setup/common.sh@33 -- # return 0 00:03:48.403 22:46:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.403 22:46:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.403 22:46:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.403 22:46:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.404 22:46:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.404 22:46:16 -- setup/common.sh@18 -- # local node=1 00:03:48.404 22:46:16 -- setup/common.sh@19 -- # local var val 00:03:48.404 22:46:16 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.404 22:46:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.404 22:46:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.404 22:46:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.404 22:46:16 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.404 22:46:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 22:46:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53875604 kB' 'MemUsed: 6804236 kB' 'SwapCached: 0 kB' 'Active: 4269632 kB' 'Inactive: 189736 kB' 'Active(anon): 4006844 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 189736 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4118376 kB' 'Mapped: 153100 kB' 'AnonPages: 341060 kB' 'Shmem: 3665852 kB' 'KernelStack: 14168 kB' 'PageTables: 5160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134080 kB' 'Slab: 472984 kB' 'SReclaimable: 134080 kB' 'SUnreclaim: 338904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # continue 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.404 22:46:16 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.404 22:46:16 -- setup/common.sh@32 -- # continue 
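The per-node pass running through this part of the trace (and continued just below) reads /sys/devices/system/node/node0/meminfo and node1/meminfo in turn, accumulates HugePages_Surp into nodes_test, and finally expects the 1024 global pages to be split evenly, 512 per node. A rough standalone equivalent of that check, with illustrative variable names rather than the setup/hugepages.sh internals:

#!/usr/bin/env bash
# Illustrative sketch: print each NUMA node's hugepage count next to the even
# split the test expects; assumes a Linux host exposing /sys/devices/system/node.
total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( total / ${#nodes[@]} ))

for node_dir in "${nodes[@]}"; do
    node=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    count=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
    echo "node$node=$count expecting $per_node"
done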
00:03:48.404 22:46:16 -- setup/common.sh@32 -- # [[ Active .. HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue   (one check per node1 meminfo field, none matching)
00:03:48.405 22:46:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.405 22:46:16 -- setup/common.sh@33 -- # echo 0
00:03:48.405 22:46:16 -- setup/common.sh@33 -- # return 0
00:03:48.405 22:46:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.405 22:46:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.405 22:46:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.405 22:46:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.405 22:46:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:48.405 node0=512 expecting 512
00:03:48.405 22:46:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.405 22:46:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.405 22:46:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.405 22:46:16 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:48.405 node1=512 expecting 512
00:03:48.405 22:46:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:48.405
00:03:48.405 real 0m3.817s
00:03:48.405 user 0m1.500s
00:03:48.405 sys 0m2.369s
00:03:48.405 22:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:48.405 22:46:16 -- common/autotest_common.sh@10 -- # set +x
00:03:48.405 ************************************
00:03:48.405 END TEST per_node_1G_alloc
00:03:48.405 ************************************
00:03:48.405 22:46:16 --
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:48.405 22:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.405 22:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.405 22:46:16 -- common/autotest_common.sh@10 -- # set +x 00:03:48.405 ************************************ 00:03:48.405 START TEST even_2G_alloc 00:03:48.405 ************************************ 00:03:48.405 22:46:16 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:48.405 22:46:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:48.405 22:46:16 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.405 22:46:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.405 22:46:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.405 22:46:16 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.405 22:46:16 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.405 22:46:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.405 22:46:16 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.405 22:46:16 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.405 22:46:16 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.405 22:46:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.405 22:46:16 -- setup/hugepages.sh@83 -- # : 512 00:03:48.405 22:46:16 -- setup/hugepages.sh@84 -- # : 1 00:03:48.405 22:46:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:48.405 22:46:16 -- setup/hugepages.sh@83 -- # : 0 00:03:48.405 22:46:16 -- setup/hugepages.sh@84 -- # : 0 00:03:48.405 22:46:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.405 22:46:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:48.405 22:46:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:48.405 22:46:16 -- setup/hugepages.sh@153 -- # setup output 00:03:48.405 22:46:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.405 22:46:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.778 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:51.778 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:51.778 22:46:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:51.778 22:46:19 -- setup/hugepages.sh@89 -- # local node 00:03:51.778 22:46:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:51.778 22:46:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:51.778 22:46:19 -- setup/hugepages.sh@92 -- # local surp 00:03:51.778 22:46:19 -- setup/hugepages.sh@93 -- # local resv 00:03:51.778 22:46:19 -- setup/hugepages.sh@94 -- # local anon 00:03:51.778 22:46:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:51.778 22:46:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:51.778 22:46:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:51.778 22:46:19 -- setup/common.sh@18 -- # local node= 00:03:51.778 22:46:19 -- setup/common.sh@19 -- # local var val 00:03:51.778 22:46:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.778 22:46:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.778 22:46:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.778 22:46:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.778 22:46:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.778 22:46:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.778 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105432248 kB' 'MemAvailable: 108696916 kB' 'Buffers: 2704 kB' 'Cached: 14336468 kB' 'SwapCached: 0 kB' 'Active: 11397580 kB' 'Inactive: 3514596 kB' 'Active(anon): 10985552 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576368 kB' 'Mapped: 210212 kB' 'Shmem: 10412548 kB' 'KReclaimable: 324752 kB' 'Slab: 1193240 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 868488 kB' 'KernelStack: 27440 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12502260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.778 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.778 22:46:19 -- setup/common.sh@32 -- # continue
00:03:51.778-00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ Buffers .. HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -- # continue   (one check per /proc/meminfo field, none matching)
00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:51.779 22:46:19 --
setup/common.sh@33 -- # echo 0 00:03:51.779 22:46:19 -- setup/common.sh@33 -- # return 0 00:03:51.779 22:46:19 -- setup/hugepages.sh@97 -- # anon=0 00:03:51.779 22:46:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:51.779 22:46:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.779 22:46:19 -- setup/common.sh@18 -- # local node= 00:03:51.779 22:46:19 -- setup/common.sh@19 -- # local var val 00:03:51.779 22:46:19 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.779 22:46:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.779 22:46:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.779 22:46:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.779 22:46:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.779 22:46:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105435952 kB' 'MemAvailable: 108700620 kB' 'Buffers: 2704 kB' 'Cached: 14336472 kB' 'SwapCached: 0 kB' 'Active: 11396756 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984728 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575456 kB' 'Mapped: 210208 kB' 'Shmem: 10412552 kB' 'KReclaimable: 324752 kB' 'Slab: 1193200 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 868448 kB' 'KernelStack: 27200 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12502272 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # continue 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.779 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.779 22:46:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
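For reference, the lookup that setup/common.sh@16-33 is tracing above and below reduces to the following sketch, reconstructed from the xtrace. It is not the verbatim SPDK helper; get_meminfo_sketch and its exact control flow are illustrative.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

    # get_meminfo_sketch FIELD [NODE] - echo FIELD's value from /proc/meminfo,
    # or from /sys/devices/system/node/nodeN/meminfo when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        elif [[ -n $node ]]; then
            return 1   # a node was requested but it has no meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # the [[ <field> == \H\u\g\e\P\a\g\e\s... ]] tests in the trace are this comparison
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Surp     -> 0   (system-wide, as traced here)
    #      get_meminfo_sketch HugePages_Total 0  -> 512 (NUMA node 0, as traced further down)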
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every /proc/meminfo field (MemTotal through HugePages_Rsvd); none match \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:03:51.780 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.780 22:46:19 -- setup/common.sh@33 -- # echo 0
00:03:51.780 22:46:19 -- setup/common.sh@33 -- # return 0
00:03:51.780 22:46:19 -- setup/hugepages.sh@99 -- # surp=0
00:03:51.780 22:46:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.780 22:46:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.780 22:46:19 -- setup/common.sh@18 -- # local node=
[trace condensed: locals, mem_f=/proc/meminfo selection and mapfile setup, identical to the HugePages_Surp call above]
00:03:51.781 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105434872 kB' 'MemAvailable: 108699540 kB' 'Buffers: 2704 kB' 'Cached: 14336484 kB' 'SwapCached: 0 kB' 'Active: 11396336 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984308 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575024 kB' 'Mapped: 210132 kB' 'Shmem: 10412564 kB' 'KReclaimable: 324752 kB' 'Slab: 1193136 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 868384 kB' 'KernelStack: 27392 kB' 'PageTables: 8652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12502288 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB'
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every /proc/meminfo field (MemTotal through HugePages_Free); none match \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:03:51.782 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.782 22:46:19 -- setup/common.sh@33 -- # echo 0
00:03:51.782 22:46:19 -- setup/common.sh@33 -- # return 0
00:03:51.782 22:46:19 -- setup/hugepages.sh@100 -- # resv=0
00:03:51.782 22:46:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:51.782 22:46:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:51.782 22:46:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:51.782 22:46:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:51.782 22:46:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.782 22:46:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
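In plain terms, the checks traced at setup/hugepages.sh@107 and @109 above (and repeated at @110 below once HugePages_Total is re-read) amount to the following sanity condition. This is a sketch using the values from this run; the variable layout is illustrative, not the exact hugepages.sh code.

    # Requested hugepage pool size for this test run.
    expected=1024

    # Values just read back via the get_meminfo sketch above:
    nr_hugepages=1024   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    anon=0              # AnonHugePages

    # The allocated pool must account for the full request, with no surplus
    # or reserved pages outstanding, before the per-node split is verified.
    (( expected == nr_hugepages + surp + resv )) || exit 1
    (( expected == nr_hugepages )) || exit 1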
00:03:51.782 22:46:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: get_meminfo locals, mem_f=/proc/meminfo selection and mapfile setup, as in the calls above]
00:03:51.783 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105434048 kB' 'MemAvailable: 108698716 kB' 'Buffers: 2704 kB' 'Cached: 14336496 kB' 'SwapCached: 0 kB' 'Active: 11396052 kB' 'Inactive: 3514596 kB' 'Active(anon): 10984024 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574676 kB' 'Mapped: 210132 kB' 'Shmem: 10412576 kB' 'KReclaimable: 324752 kB' 'Slab: 1193136 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 868384 kB' 'KernelStack: 27312 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12502300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB'
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every /proc/meminfo field (MemTotal through Unaccepted); none match \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l]
00:03:52.047 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.047 22:46:19 -- setup/common.sh@33 -- # echo 1024
00:03:52.047 22:46:19 -- setup/common.sh@33 -- # return 0
00:03:52.047 22:46:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.047 22:46:19 -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.047 22:46:19 -- setup/hugepages.sh@27 -- # local node
00:03:52.047 22:46:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.047 22:46:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.047 22:46:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.047 22:46:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.047 22:46:19 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.047 22:46:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.047 22:46:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.047 22:46:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.047 22:46:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.047 22:46:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.047 22:46:19 -- setup/common.sh@18 -- # local node=0
00:03:52.047 22:46:19 -- setup/common.sh@19 -- # local var val
00:03:52.047 22:46:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.047 22:46:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.047 22:46:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.047 22:46:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.047 22:46:19 -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.047 22:46:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.047 22:46:19 -- setup/common.sh@31 -- # IFS=': '
00:03:52.047 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51542316 kB' 'MemUsed: 14116692 kB' 'SwapCached: 0 kB' 'Active: 7126236 kB' 'Inactive: 3324860 kB' 'Active(anon): 6976996 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10220780 kB' 'Mapped: 57004 kB' 'AnonPages: 233528 kB' 'Shmem: 6746680 kB' 'KernelStack: 13208 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 720332 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 529660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
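The per-node pass started at setup/hugepages.sh@115-117 above can be read as the sketch below, reconstructed from the trace. nodes_test is assumed to carry the same 512/512 per-node targets that get_nodes records in nodes_sys, and get_meminfo_sketch is the illustrative helper sketched earlier.

    # Expected hugepage split across the two NUMA nodes reported by get_nodes.
    nodes_test=(512 512)
    resv=0   # reserved pages from the system-wide check above

    for node in "${!nodes_test[@]}"; do
        # each node's target absorbs any reserved pages ...
        (( nodes_test[node] += resv ))
        # ... plus that node's own surplus pages, read from nodeN/meminfo
        (( nodes_test[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
    done
    # in this run both additions are 0, so nodes_test stays (512 512)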
[trace condensed: setup/common.sh@31-32 repeat the read/compare/continue cycle for every node0 meminfo field (MemTotal through HugePages_Free); none match \H\u\g\e\P\a\g\e\s\_\S\u\r\p]
00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.048 22:46:19 -- setup/common.sh@33 -- # echo 0
00:03:52.048 22:46:19 -- setup/common.sh@33 -- # return 0
00:03:52.048 22:46:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.048 22:46:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.048 22:46:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.048 22:46:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:52.048 22:46:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.048 22:46:19 -- setup/common.sh@18 -- # local node=1
00:03:52.048 22:46:19 -- setup/common.sh@19 -- # local var val
00:03:52.048 22:46:19 -- setup/common.sh@20 -- # local mem_f mem
00:03:52.048 22:46:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.048 22:46:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.048 22:46:19 -- setup/common.sh@24 --
# mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.048 22:46:19 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.048 22:46:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53891368 kB' 'MemUsed: 6788472 kB' 'SwapCached: 0 kB' 'Active: 4270720 kB' 'Inactive: 189736 kB' 'Active(anon): 4007932 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 189736 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4118420 kB' 'Mapped: 153128 kB' 'AnonPages: 342088 kB' 'Shmem: 3665896 kB' 'KernelStack: 14072 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134080 kB' 'Slab: 472804 kB' 'SReclaimable: 134080 kB' 'SUnreclaim: 338724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- 
setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.048 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.048 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # continue 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.049 22:46:19 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.049 22:46:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.049 22:46:19 -- setup/common.sh@33 -- # echo 0 00:03:52.049 22:46:19 -- setup/common.sh@33 -- # return 0 00:03:52.049 22:46:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.049 22:46:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.049 22:46:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.049 22:46:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.049 22:46:19 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.049 node0=512 expecting 512 00:03:52.049 22:46:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.049 22:46:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.049 22:46:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.049 22:46:19 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:52.049 node1=512 expecting 512 00:03:52.049 22:46:19 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.049 00:03:52.049 real 0m3.563s 00:03:52.049 user 0m1.395s 00:03:52.049 sys 0m2.205s 00:03:52.049 22:46:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.049 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:03:52.049 ************************************ 00:03:52.049 END TEST even_2G_alloc 00:03:52.049 ************************************ 00:03:52.049 22:46:20 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:52.049 22:46:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.049 22:46:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.049 22:46:20 -- common/autotest_common.sh@10 -- # set +x 00:03:52.049 ************************************ 00:03:52.049 START TEST odd_alloc 00:03:52.049 ************************************ 00:03:52.049 22:46:20 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:52.049 22:46:20 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:52.049 22:46:20 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:52.049 22:46:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.049 22:46:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.049 22:46:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.049 22:46:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.049 22:46:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.049 22:46:20 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.049 22:46:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.049 22:46:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.049 22:46:20 
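For reference, the get_meminfo helper that produced the two long read/continue loops above boils down to the following minimal sketch, reconstructed from the traced setup/common.sh commands (the real script's argument handling and bookkeeping may differ slightly):

    shopt -s extglob                               # the "Node N " prefix strip below uses +([0-9])
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # prefer the per-node meminfo when a node was requested and its file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # per-node files prefix every line with "Node N "
        while IFS=': ' read -r var val _; do       # split "HugePages_Surp:   0" into key/value
            [[ $var == "$get" ]] || continue       # skip fields until the requested one is reached
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Surp 0, it walks node0's meminfo and prints 0, which is the echo 0 / return 0 pair visible in the trace.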
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.049 22:46:20 -- setup/hugepages.sh@83 -- # : 513 00:03:52.049 22:46:20 -- setup/hugepages.sh@84 -- # : 1 00:03:52.049 22:46:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.049 22:46:20 -- setup/hugepages.sh@83 -- # : 0 00:03:52.049 22:46:20 -- setup/hugepages.sh@84 -- # : 0 00:03:52.049 22:46:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.049 22:46:20 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.049 22:46:20 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.049 22:46:20 -- setup/hugepages.sh@160 -- # setup output 00:03:52.049 22:46:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.049 22:46:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:55.355 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:55.355 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:55.355 22:46:23 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.355 22:46:23 -- setup/hugepages.sh@89 -- # local node 00:03:55.355 22:46:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.355 22:46:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.355 22:46:23 -- setup/hugepages.sh@92 -- # local surp 00:03:55.355 22:46:23 -- setup/hugepages.sh@93 -- # local resv 00:03:55.355 22:46:23 -- setup/hugepages.sh@94 -- # local anon 00:03:55.355 22:46:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.355 22:46:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.355 22:46:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.355 22:46:23 -- setup/common.sh@18 -- # local node= 00:03:55.355 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.355 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.355 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.355 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.355 22:46:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.355 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.355 
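The odd_alloc case requests 2098176 kB, which at the 2048 kB default page size is 1024.5 pages and therefore rounds up to an odd 1025. The per-node split traced at setup/hugepages.sh@81-84 then hands out pages highest node first, so the remainder lands on node0; a sketch consistent with the traced values (512 for node1, 513 for node0), not necessarily the script's exact wording:

    _nr_hugepages=1025 _no_nodes=2
    declare -a nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # node1 -> 512, then node0 -> 513
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # 513 pages left after node1, then 0
        : $(( --_no_nodes ))
    done

The two ':' no-ops are what appear in the trace as '# : 513' / '# : 1' on the first pass and '# : 0' / '# : 0' on the second.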
22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105416972 kB' 'MemAvailable: 108681640 kB' 'Buffers: 2704 kB' 'Cached: 14336616 kB' 'SwapCached: 0 kB' 'Active: 11399092 kB' 'Inactive: 3514596 kB' 'Active(anon): 10987064 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577776 kB' 'Mapped: 210268 kB' 'Shmem: 10412696 kB' 'KReclaimable: 324752 kB' 'Slab: 1193780 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869028 kB' 'KernelStack: 27568 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12503056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.355 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.355 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 
00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.356 22:46:23 -- setup/common.sh@33 -- # echo 0 00:03:55.356 22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.356 22:46:23 -- setup/hugepages.sh@97 -- # anon=0 00:03:55.356 22:46:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.356 22:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.356 22:46:23 -- setup/common.sh@18 -- # local node= 00:03:55.356 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.356 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.356 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.356 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.356 22:46:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.356 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.356 22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105418220 kB' 'MemAvailable: 108682888 kB' 'Buffers: 2704 kB' 'Cached: 14336620 kB' 'SwapCached: 0 kB' 'Active: 11399336 kB' 'Inactive: 3514596 kB' 'Active(anon): 10987308 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577944 kB' 'Mapped: 210260 kB' 'Shmem: 10412700 kB' 'KReclaimable: 324752 kB' 'Slab: 1193756 kB' 'SReclaimable: 324752 kB' 'SUnreclaim: 869004 kB' 'KernelStack: 27424 kB' 'PageTables: 9656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12501424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235588 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.356 22:46:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:55.356 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.356 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 
22:46:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.357 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.357 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.358 22:46:23 -- setup/common.sh@33 -- # echo 0 00:03:55.358 22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.358 22:46:23 -- setup/hugepages.sh@99 -- # surp=0 00:03:55.358 22:46:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.358 22:46:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.358 22:46:23 -- setup/common.sh@18 -- # local node= 00:03:55.358 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.358 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.358 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.358 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.358 22:46:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.358 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.358 22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.358 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.358 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.358 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105419780 kB' 'MemAvailable: 108684444 kB' 'Buffers: 2704 kB' 'Cached: 14336620 kB' 'SwapCached: 0 kB' 'Active: 11398920 kB' 'Inactive: 3514596 kB' 'Active(anon): 10986892 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577504 kB' 'Mapped: 210244 kB' 'Shmem: 10412700 kB' 'KReclaimable: 324744 kB' 'Slab: 1193784 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869040 kB' 'KernelStack: 27296 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12498152 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:55.358 22:46:23 
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] (xtrace condensed: get_meminfo walks every key of the snapshot above, hitting 'continue' on each, until it reaches HugePages_Rsvd) 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.359 22:46:23 -- setup/common.sh@33 -- # echo 0
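For readability, here is what the helper driving the scan condensed above does. This is a sketch reconstructed from the xtrace lines (setup/common.sh@17 through @33), not the verbatim SPDK script, so treat names and details as approximate.

  # Sketch of get_meminfo as reconstructed from the trace above (assumption: the real
  # setup/common.sh may differ in detail; this only mirrors the traced behaviour).
  shopt -s extglob
  get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local mem line var val _
    # With a node index, read the per-NUMA-node view instead of the global one.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"    # split "Key:   value kB" into key and value
      if [[ $var == "$get" ]]; then
        echo "$val"                             # e.g. 0 for HugePages_Rsvd in this run
        return 0
      fi
    done
  }
  get_meminfo HugePages_Rsvd     # global /proc/meminfo view
  get_meminfo HugePages_Surp 0   # node 0 view, used a little further below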
22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.359 22:46:23 -- setup/hugepages.sh@100 -- # resv=0 00:03:55.359 22:46:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.359 nr_hugepages=1025 00:03:55.359 22:46:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.359 resv_hugepages=0 00:03:55.359 22:46:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.359 surplus_hugepages=0 00:03:55.359 22:46:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.359 anon_hugepages=0 00:03:55.359 22:46:23 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.359 22:46:23 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:55.359 22:46:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.359 22:46:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.359 22:46:23 -- setup/common.sh@18 -- # local node= 00:03:55.359 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.359 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.359 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.359 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.359 22:46:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.359 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.359 22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105419888 kB' 'MemAvailable: 108684552 kB' 'Buffers: 2704 kB' 'Cached: 14336644 kB' 'SwapCached: 0 kB' 'Active: 11397796 kB' 'Inactive: 3514596 kB' 'Active(anon): 10985768 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576416 kB' 'Mapped: 210184 kB' 'Shmem: 10412724 kB' 'KReclaimable: 324744 kB' 'Slab: 1193344 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 868600 kB' 'KernelStack: 27232 kB' 'PageTables: 9100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 12498168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.359 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:55.359 22:46:23 -- setup/common.sh@32 -- # continue (xtrace condensed: the per-key scan of the snapshot above continues, every key from Buffers through HardwareCorrupted hitting 'continue')
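The check at setup/hugepages.sh@107 above (and @110 just below) is the standard hugepage accounting identity: the kernel's HugePages_Total must equal the count the test requested plus any surplus and reserved pages. A minimal stand-alone re-check, using the get_meminfo sketch above and the 1025 pages this odd_alloc run requested, could look like:

  nr_hugepages=1025                     # requested by this odd_alloc run
  surp=$(get_meminfo HugePages_Surp)    # 0 above
  resv=$(get_meminfo HugePages_Rsvd)    # 0 above
  total=$(get_meminfo HugePages_Total)  # 1025, matched just below
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'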
22:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.360 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.360 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.361 22:46:23 -- setup/common.sh@33 -- # echo 1025 00:03:55.361 22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.361 22:46:23 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.361 22:46:23 -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.361 22:46:23 -- setup/hugepages.sh@27 -- # local node 00:03:55.361 22:46:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.361 22:46:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.361 22:46:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.361 22:46:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.361 22:46:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.361 22:46:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.361 22:46:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.361 22:46:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.361 22:46:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.361 22:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.361 22:46:23 
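get_nodes, traced just above, records how many of those pages actually landed on each NUMA node: 512 on node0 and 513 on node1 in this run, which the per-node meminfo snapshots below confirm. A hedged sketch of an equivalent read follows; the sysfs counter used here is an assumption based on the 2048 kB Hugepagesize reported above, while the harness itself goes through the per-node meminfo files.

  # Assumption: 2 MiB default hugepages; the real get_nodes may read a different source.
  nodes_sys=()
  for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "node0=${nodes_sys[0]} node1=${nodes_sys[1]}"   # 512 and 513 in this run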
-- setup/common.sh@18 -- # local node=0 00:03:55.361 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.361 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.361 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.361 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.361 22:46:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.361 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.361 22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51523576 kB' 'MemUsed: 14135432 kB' 'SwapCached: 0 kB' 'Active: 7127404 kB' 'Inactive: 3324860 kB' 'Active(anon): 6978164 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10220936 kB' 'Mapped: 57040 kB' 'AnonPages: 234548 kB' 'Shmem: 6746836 kB' 'KernelStack: 13240 kB' 'PageTables: 3976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 720400 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 529728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.361 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.361 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.361 22:46:23 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (xtrace condensed: per-key scan of the node0 snapshot above, every key hitting 'continue') 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
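The scan condensed above reads /sys/devices/system/node/node0/meminfo, which exposes a per-NUMA-node copy of the hugepage counters. Outside the harness the same figures can be spot-checked with a plain grep (a convenience one-liner for readers, not something the traced script runs):

  grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo
  # e.g. "Node 0 HugePages_Total:     512", matching the node0 snapshot above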
00:03:55.362 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 22:46:23 -- setup/common.sh@33 -- # echo 0 00:03:55.362 22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.362 22:46:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.362 22:46:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.362 22:46:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.362 22:46:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.362 22:46:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.362 22:46:23 -- setup/common.sh@18 -- # local node=1 00:03:55.362 22:46:23 -- setup/common.sh@19 -- # local var val 00:03:55.362 22:46:23 -- setup/common.sh@20 -- # local mem_f mem 00:03:55.362 22:46:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.362 22:46:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.362 22:46:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.362 22:46:23 -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.362 22:46:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 53896644 kB' 'MemUsed: 6783196 kB' 'SwapCached: 0 kB' 'Active: 4270420 kB' 'Inactive: 189736 kB' 'Active(anon): 4007632 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 189736 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4118428 kB' 'Mapped: 153144 kB' 'AnonPages: 341860 kB' 'Shmem: 3665904 kB' 'KernelStack: 13992 kB' 'PageTables: 5124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134072 kB' 'Slab: 472944 kB' 'SReclaimable: 134072 kB' 'SUnreclaim: 338872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.362 22:46:23 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] (xtrace condensed: per-key scan of the node1 snapshot above, every key hitting 'continue') 00:03:55.362 22:46:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 22:46:23 --
setup/common.sh@32 -- # continue 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.362 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # continue 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.363 22:46:23 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.363 22:46:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.363 22:46:23 -- setup/common.sh@33 -- # echo 0 00:03:55.363 22:46:23 -- setup/common.sh@33 -- # return 0 00:03:55.363 22:46:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.363 22:46:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.363 22:46:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.363 22:46:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.363 22:46:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:55.363 node0=512 expecting 513 00:03:55.363 22:46:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.363 22:46:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
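The 'node0=512 expecting 513' line just above is not a failure: the test only requires that the multiset of per-node counts, {512, 513}, matches the split it planned, not which node received which half. The trace does this (hugepages.sh@126 through @130, above and below) by using the counts as array indices and comparing the resulting index lists. A condensed sketch with the values observed in this run:

  nodes_test=([0]=513 [1]=512)   # split the test planned, per node
  nodes_sys=([0]=512 [1]=513)    # split the kernel reports, per the node0/node1 reads above
  sorted_t=(); sorted_s=()
  for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo PASS   # "512 513" == "512 513" here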
00:03:55.363 22:46:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.363 22:46:23 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.363 node1=513 expecting 512 00:03:55.363 22:46:23 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.363 00:03:55.363 real 0m3.480s 00:03:55.363 user 0m1.347s 00:03:55.363 sys 0m2.140s 00:03:55.363 22:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.363 22:46:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.363 ************************************ 00:03:55.363 END TEST odd_alloc 00:03:55.363 ************************************ 00:03:55.625 22:46:23 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.625 22:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:55.625 22:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:55.625 22:46:23 -- common/autotest_common.sh@10 -- # set +x 00:03:55.625 ************************************ 00:03:55.625 START TEST custom_alloc 00:03:55.625 ************************************ 00:03:55.625 22:46:23 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:55.625 22:46:23 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.625 22:46:23 -- setup/hugepages.sh@169 -- # local node 00:03:55.625 22:46:23 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.625 22:46:23 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.625 22:46:23 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.625 22:46:23 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.625 22:46:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.625 22:46:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.625 22:46:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.625 22:46:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.625 22:46:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.625 22:46:23 -- setup/hugepages.sh@83 -- # : 256 00:03:55.625 22:46:23 -- setup/hugepages.sh@84 -- # : 1 00:03:55.625 22:46:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.625 22:46:23 -- setup/hugepages.sh@83 -- # : 0 00:03:55.625 22:46:23 -- setup/hugepages.sh@84 -- # : 0 00:03:55.625 22:46:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.625 22:46:23 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.625 22:46:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.625 22:46:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.625 22:46:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.625 22:46:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.625 22:46:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.625 22:46:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.625 22:46:23 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.625 22:46:23 -- setup/hugepages.sh@78 -- # return 0 00:03:55.625 22:46:23 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.625 22:46:23 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.625 22:46:23 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.625 22:46:23 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.625 22:46:23 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.625 22:46:23 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.625 22:46:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.625 22:46:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.625 22:46:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.625 22:46:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.625 22:46:23 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.625 22:46:23 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.625 22:46:23 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.625 22:46:23 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.625 22:46:23 -- setup/hugepages.sh@78 -- # return 0 00:03:55.625 22:46:23 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.625 22:46:23 -- setup/hugepages.sh@187 -- # setup output 00:03:55.625 22:46:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.625 22:46:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.176 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.6 
(8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:58.176 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:58.176 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:58.438 22:46:26 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:58.438 22:46:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:58.438 22:46:26 -- setup/hugepages.sh@89 -- # local node 00:03:58.438 22:46:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.438 22:46:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.438 22:46:26 -- setup/hugepages.sh@92 -- # local surp 00:03:58.438 22:46:26 -- setup/hugepages.sh@93 -- # local resv 00:03:58.438 22:46:26 -- setup/hugepages.sh@94 -- # local anon 00:03:58.438 22:46:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.438 22:46:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.438 22:46:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.438 22:46:26 -- setup/common.sh@18 -- # local node= 00:03:58.438 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.438 22:46:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.438 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.438 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.438 22:46:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.438 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.438 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.438 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.438 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104356008 kB' 'MemAvailable: 107620672 kB' 'Buffers: 2704 kB' 'Cached: 14336744 kB' 'SwapCached: 0 kB' 'Active: 11398272 kB' 'Inactive: 3514596 kB' 'Active(anon): 10986244 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576268 kB' 'Mapped: 209200 kB' 'Shmem: 10412824 kB' 'KReclaimable: 324744 kB' 'Slab: 1193968 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869224 kB' 'KernelStack: 27328 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12468936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235620 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 
22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.439 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.439 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.705 22:46:26 -- setup/common.sh@33 -- # echo 0 00:03:58.705 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.705 22:46:26 -- setup/hugepages.sh@97 -- # anon=0 00:03:58.705 22:46:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.705 22:46:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.705 22:46:26 -- setup/common.sh@18 -- # local node= 00:03:58.705 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.705 22:46:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.705 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.705 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.705 22:46:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.705 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.705 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104357772 kB' 'MemAvailable: 107622436 kB' 'Buffers: 2704 kB' 'Cached: 14336752 kB' 'SwapCached: 0 kB' 'Active: 11398940 kB' 'Inactive: 3514596 kB' 'Active(anon): 10986912 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576776 kB' 'Mapped: 209212 kB' 'Shmem: 10412832 kB' 'KReclaimable: 324744 kB' 'Slab: 1193560 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 868816 kB' 'KernelStack: 27408 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12468952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # 
continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.705 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.705 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- 
# continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.706 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.706 22:46:26 -- setup/common.sh@33 -- # echo 0 00:03:58.706 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.706 22:46:26 -- setup/hugepages.sh@99 -- # surp=0 00:03:58.706 22:46:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.706 22:46:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.706 22:46:26 -- setup/common.sh@18 -- # local node= 00:03:58.706 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.706 22:46:26 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:58.706 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.706 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.706 22:46:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.706 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.706 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.706 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 104357396 kB' 'MemAvailable: 107622060 kB' 'Buffers: 2704 kB' 'Cached: 14336768 kB' 'SwapCached: 0 kB' 'Active: 11397848 kB' 'Inactive: 3514596 kB' 'Active(anon): 10985820 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576076 kB' 'Mapped: 209112 kB' 'Shmem: 10412848 kB' 'KReclaimable: 324744 kB' 'Slab: 1193548 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 868804 kB' 'KernelStack: 27392 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12469104 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235732 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.707 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.707 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 
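The repetitive read loops traced above (for AnonHugePages, HugePages_Surp and now HugePages_Rsvd) all come from the same get_meminfo helper in setup/common.sh: it slurps /proc/meminfo, or a node's /sys/devices/system/node/nodeN/meminfo when a node argument is given, and prints the value of one field. The following is a minimal standalone sketch of that pattern reconstructed from the trace, not the script's verbatim source; the trailing "echo 0" fallback and the simplified argument handling are assumptions.

#!/usr/bin/env bash
shopt -s extglob

# Reconstructed sketch of setup/common.sh's get_meminfo: print one field from
# /proc/meminfo, or from a node's view of it when a node number is supplied.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so the same
    # "Key: value" parse works for both sources (this expansion appears
    # verbatim in the trace above).
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0   # assumed fallback when the field is absent
}

get_meminfo HugePages_Total      # global count, 1536 in this run
get_meminfo HugePages_Surp 0     # node0's surplus pages, 0 in this run

The single numeric line it prints is what the surrounding hugepages.sh assertions, such as (( 1536 == nr_hugepages + surp + resv )), consume.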
00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.708 22:46:26 -- setup/common.sh@33 -- # echo 0 00:03:58.708 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.708 22:46:26 -- setup/hugepages.sh@100 -- # resv=0 00:03:58.708 22:46:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:58.708 nr_hugepages=1536 00:03:58.708 22:46:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.708 resv_hugepages=0 00:03:58.708 22:46:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.708 surplus_hugepages=0 00:03:58.708 22:46:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.708 anon_hugepages=0 00:03:58.708 22:46:26 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:58.708 22:46:26 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:58.708 22:46:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.708 22:46:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.708 22:46:26 -- setup/common.sh@18 -- # local node= 00:03:58.708 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.708 22:46:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.708 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.708 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.708 22:46:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.708 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.708 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
126338848 kB' 'MemFree: 104360456 kB' 'MemAvailable: 107625120 kB' 'Buffers: 2704 kB' 'Cached: 14336784 kB' 'SwapCached: 0 kB' 'Active: 11398316 kB' 'Inactive: 3514596 kB' 'Active(anon): 10986288 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576432 kB' 'Mapped: 209112 kB' 'Shmem: 10412864 kB' 'KReclaimable: 324744 kB' 'Slab: 1193644 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 868900 kB' 'KernelStack: 27312 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 12467476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235684 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.708 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.708 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.709 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.709 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.709 22:46:26 -- setup/common.sh@33 -- # echo 1536 00:03:58.709 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.709 22:46:26 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:58.709 22:46:26 -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.709 22:46:26 -- setup/hugepages.sh@27 -- # local node 00:03:58.709 22:46:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.709 22:46:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.709 22:46:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.709 22:46:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.709 22:46:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.709 22:46:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.709 22:46:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.710 22:46:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.710 22:46:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.710 22:46:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.710 22:46:26 -- setup/common.sh@18 -- # local node=0 00:03:58.710 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.710 22:46:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.710 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.710 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.710 22:46:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.710 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.710 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 51531884 kB' 'MemUsed: 14127124 kB' 'SwapCached: 0 kB' 'Active: 7126720 kB' 'Inactive: 3324860 kB' 'Active(anon): 6977480 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10221024 kB' 'Mapped: 56764 kB' 'AnonPages: 233712 kB' 'Shmem: 6746924 kB' 'KernelStack: 13128 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 720384 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 529712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:58.710 22:46:26 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- 
setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.710 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.710 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@33 -- # echo 0 00:03:58.711 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.711 22:46:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.711 22:46:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.711 22:46:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.711 22:46:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
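The long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' followed by 'continue' above and below are xtrace from the get_meminfo helper in setup/common.sh: it loads /proc/meminfo (or a single node's meminfo when a node number is passed, as in 'get_meminfo HugePages_Surp 1' here), then walks the file one field at a time until it reaches the requested key and echoes that key's value. A minimal sketch of that scan, reconstructed from the trace alone (the function name and body here are illustrative, not the shipped implementation):

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            # Per-node files prefix each entry with "Node <n> "; strip it so the
            # key names line up with the global /proc/meminfo layout.
            [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
            # Every "[[ key == ... ]] / continue" pair in the trace is one
            # iteration of this comparison.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

On this box the scan resolves HugePages_Surp to 0 on node 0 (the 'echo 0' just above) and, below, to 0 on node 1 as well.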
00:03:58.711 22:46:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.711 22:46:26 -- setup/common.sh@18 -- # local node=1 00:03:58.711 22:46:26 -- setup/common.sh@19 -- # local var val 00:03:58.711 22:46:26 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.711 22:46:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.711 22:46:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:58.711 22:46:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:58.711 22:46:26 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.711 22:46:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 52827496 kB' 'MemUsed: 7852344 kB' 'SwapCached: 0 kB' 'Active: 4272088 kB' 'Inactive: 189736 kB' 'Active(anon): 4009300 kB' 'Inactive(anon): 0 kB' 'Active(file): 262788 kB' 'Inactive(file): 189736 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4118480 kB' 'Mapped: 152348 kB' 'AnonPages: 343424 kB' 'Shmem: 3665956 kB' 'KernelStack: 14104 kB' 'PageTables: 5412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 134072 kB' 'Slab: 473260 kB' 'SReclaimable: 134072 kB' 'SUnreclaim: 339188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 
00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- 
setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.711 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.711 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 
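Once the node-1 scan just below finishes (it also reports HugePages_Surp: 0 before returning), custom_alloc's verification reduces to plain accounting over values already printed in the two per-node dumps: the 1536-page global pool has to equal the requested page count plus surplus and reserved pages (the '(( 1536 == nr_hugepages + surp + resv ))' check earlier), and the per-node HugePages_Total values have to match the 512/1024 split the test configured, which is what the 'node0=512 expecting 512' / 'node1=1024 expecting 1024' lines further down assert. Condensed, with the numbers from this run (a sketch of the checks, not the hugepages.sh code itself):

    # Figures as they appear in this run's trace.
    hugepages_total=1536        # global HugePages_Total
    nr_hugepages=1536           # requested: 512 on node 0 + 1024 on node 1
    surp=0 resv=0               # surplus/reserved pages, both 0 in this run
    (( hugepages_total == nr_hugepages + surp + resv )) || echo "pool size mismatch"

    node0=512 node1=1024        # per-node HugePages_Total from the node dumps
    [[ "$node0,$node1" == "512,1024" ]] || echo "unexpected per-node split"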
00:03:58.712 22:46:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # continue 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.712 22:46:26 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.712 22:46:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.712 22:46:26 -- setup/common.sh@33 -- # echo 0 00:03:58.712 22:46:26 -- setup/common.sh@33 -- # return 0 00:03:58.712 22:46:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.712 22:46:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.712 22:46:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.712 22:46:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.712 22:46:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:58.712 node0=512 expecting 512 00:03:58.712 22:46:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.712 22:46:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.712 22:46:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.712 22:46:26 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:58.712 node1=1024 expecting 1024 00:03:58.712 22:46:26 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:58.712 00:03:58.712 real 0m3.228s 00:03:58.712 user 0m1.123s 00:03:58.712 sys 0m2.028s 00:03:58.712 22:46:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.712 22:46:26 -- common/autotest_common.sh@10 -- # set +x 00:03:58.712 ************************************ 00:03:58.712 END TEST custom_alloc 00:03:58.712 ************************************ 00:03:58.712 22:46:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:58.712 22:46:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.712 22:46:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.712 22:46:26 -- common/autotest_common.sh@10 -- # set +x 00:03:58.712 ************************************ 00:03:58.712 START TEST no_shrink_alloc 00:03:58.712 ************************************ 00:03:58.712 22:46:26 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:58.712 22:46:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:58.712 22:46:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:58.712 22:46:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.712 22:46:26 -- setup/hugepages.sh@51 -- # shift 00:03:58.712 22:46:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.712 22:46:26 -- setup/hugepages.sh@52 
-- # local node_ids 00:03:58.712 22:46:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.712 22:46:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:58.712 22:46:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.712 22:46:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.712 22:46:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.712 22:46:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.712 22:46:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.712 22:46:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.712 22:46:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.712 22:46:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.712 22:46:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.712 22:46:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:58.712 22:46:26 -- setup/hugepages.sh@73 -- # return 0 00:03:58.712 22:46:26 -- setup/hugepages.sh@198 -- # setup output 00:03:58.712 22:46:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.712 22:46:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:02.018 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:02.018 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:02.018 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:02.280 22:46:30 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:02.280 22:46:30 -- setup/hugepages.sh@89 -- # local node 00:04:02.280 22:46:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.280 22:46:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.280 22:46:30 -- setup/hugepages.sh@92 -- # local surp 00:04:02.280 22:46:30 -- setup/hugepages.sh@93 -- # local resv 00:04:02.280 22:46:30 -- setup/hugepages.sh@94 -- # local anon 00:04:02.280 22:46:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.280 22:46:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.280 22:46:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.280 22:46:30 -- setup/common.sh@18 -- # local node= 00:04:02.280 22:46:30 -- setup/common.sh@19 -- # local var val 00:04:02.280 22:46:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.280 22:46:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.280 22:46:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.280 22:46:30 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.280 22:46:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.280 22:46:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105396268 kB' 'MemAvailable: 108660932 kB' 'Buffers: 2704 kB' 'Cached: 14336904 kB' 'SwapCached: 0 kB' 'Active: 11399548 kB' 'Inactive: 3514596 kB' 'Active(anon): 10987520 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577740 kB' 'Mapped: 209224 kB' 'Shmem: 10412984 kB' 'KReclaimable: 324744 kB' 'Slab: 1194180 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869436 kB' 'KernelStack: 27136 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12465320 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.280 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.280 22:46:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.281 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.281 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 
22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.546 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.546 22:46:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
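By this point custom_alloc has passed and no_shrink_alloc is running: get_test_nr_hugepages was invoked with a size of 2097152 and node 0, and with the 2048 kB Hugepagesize reported in the dumps that works out to the 1024 pages the trace assigns to node 0 ('nr_hugepages=1024', 'nodes_test[_no_nodes]=1024'). verify_nr_hugepages then repeats the global meminfo scans; the AnonHugePages pass around this point and the HugePages_Surp pass further down both resolve to 0 ('anon=0', 'surp=0'), with a HugePages_Rsvd lookup following, and the final comparison against the 1024-page request falls outside this excerpt. The size-to-page arithmetic, as a standalone sketch (treating the size argument as kB is an assumption; the trace only shows the resulting page count):

    # 2097152 / 2048 = 1024 huge pages, all of them requested on node 0.
    size=2097152
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    nr_hugepages=$(( size / hugepagesize_kb ))                           # -> 1024
    nodes_test=( [0]=$nr_hugepages )                                     # single-node request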
00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.547 22:46:30 -- setup/common.sh@33 -- # echo 0 00:04:02.547 22:46:30 -- setup/common.sh@33 -- # return 0 00:04:02.547 22:46:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:02.547 22:46:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.547 22:46:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.547 22:46:30 -- setup/common.sh@18 -- # local node= 00:04:02.547 22:46:30 -- setup/common.sh@19 -- # local var val 00:04:02.547 22:46:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.547 22:46:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.547 22:46:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.547 22:46:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.547 22:46:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.547 22:46:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105398108 kB' 'MemAvailable: 108662772 kB' 'Buffers: 2704 kB' 'Cached: 14336908 kB' 'SwapCached: 0 kB' 'Active: 11400212 kB' 'Inactive: 3514596 kB' 'Active(anon): 10988184 kB' 'Inactive(anon): 0 kB' 
'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578468 kB' 'Mapped: 209224 kB' 'Shmem: 10412988 kB' 'KReclaimable: 324744 kB' 'Slab: 1194176 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869432 kB' 'KernelStack: 27168 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12467640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.547 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 
22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.548 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.548 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.548 22:46:30 -- setup/common.sh@33 -- # echo 0 00:04:02.548 22:46:30 -- setup/common.sh@33 -- # return 0 00:04:02.548 22:46:30 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.548 22:46:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.548 22:46:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.548 22:46:30 -- setup/common.sh@18 -- # local node= 00:04:02.548 22:46:30 -- setup/common.sh@19 -- # local var val 00:04:02.548 22:46:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.548 22:46:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.548 22:46:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.548 22:46:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.548 22:46:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.548 22:46:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.549 22:46:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105398060 kB' 'MemAvailable: 108662724 kB' 'Buffers: 2704 kB' 'Cached: 14336920 kB' 'SwapCached: 0 kB' 'Active: 11400244 kB' 'Inactive: 3514596 kB' 'Active(anon): 10988216 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578896 kB' 'Mapped: 209648 kB' 'Shmem: 10413000 kB' 'KReclaimable: 324744 kB' 'Slab: 1194172 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869428 kB' 'KernelStack: 27072 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12467232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235412 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:02.549 22:46:30 -- 
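The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' until it reaches the requested key (HugePages_Surp here, which resolves to 0, so surp=0). A minimal stand-alone sketch of that read loop follows; it mirrors the pattern visible in the trace but is not the SPDK helper itself, and the function name get_field is illustrative only.
#!/usr/bin/env bash
# Sketch of the field-matching loop seen in the trace: split each meminfo
# line on ':' plus whitespace and stop at the requested key.
get_field() {
    local key=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
get_field HugePages_Surp    # prints 0 on the node captured above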
setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.549 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.549 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 
22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.550 22:46:30 -- setup/common.sh@33 -- # echo 0 00:04:02.550 22:46:30 -- setup/common.sh@33 -- # return 0 00:04:02.550 22:46:30 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.550 22:46:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.550 nr_hugepages=1024 00:04:02.550 22:46:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.550 resv_hugepages=0 00:04:02.550 22:46:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.550 surplus_hugepages=0 00:04:02.550 22:46:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.550 anon_hugepages=0 00:04:02.550 22:46:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.550 22:46:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.550 22:46:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.550 22:46:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.550 22:46:30 -- setup/common.sh@18 -- # local node= 00:04:02.550 22:46:30 -- setup/common.sh@19 -- # local var val 00:04:02.550 22:46:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.550 22:46:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.550 22:46:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.550 22:46:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.550 22:46:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.550 22:46:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105391508 kB' 'MemAvailable: 108656172 kB' 'Buffers: 2704 kB' 'Cached: 14336932 kB' 'SwapCached: 0 kB' 'Active: 11404304 kB' 'Inactive: 3514596 kB' 'Active(anon): 10992276 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 582932 kB' 'Mapped: 209652 kB' 'Shmem: 10413012 kB' 'KReclaimable: 324744 kB' 'Slab: 1194172 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869428 kB' 'KernelStack: 27104 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12471484 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235400 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 
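The echoes just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the arithmetic check (( 1024 == nr_hugepages + surp + resv )). A tiny illustration of that accounting with the values from this run; the variable names here are illustrative, not the script's own.
# Values reported in the trace above.
total=1024 requested=1024 surp=0 resv=0
if (( total == requested + surp + resv )) && (( total == requested )); then
    echo "hugepage accounting consistent: $total pages"
fi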
-- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.550 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.550 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- 
setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.551 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.551 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 
00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.552 22:46:30 -- setup/common.sh@33 -- # echo 1024 00:04:02.552 22:46:30 -- setup/common.sh@33 -- # return 0 00:04:02.552 22:46:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.552 22:46:30 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.552 22:46:30 -- setup/hugepages.sh@27 -- # local node 00:04:02.552 22:46:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.552 22:46:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.552 22:46:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.552 22:46:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.552 22:46:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.552 22:46:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.552 22:46:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.552 22:46:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.552 22:46:30 -- 
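get_nodes above globs /sys/devices/system/node/node+([0-9]) and ends up with no_nodes=2, attributing 1024 pages to node0 and 0 to node1. A short sketch of the same enumeration, assuming the standard sysfs node layout; this is not the SPDK code, just the idea it traces through.
shopt -s extglob
declare -a nodes_total
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}
    # Per-node HugePages_Total, the same figure the traced loop pulls out of nodeN/meminfo.
    nodes_total[n]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done
echo "${#nodes_total[@]} nodes, totals: ${nodes_total[*]}"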
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.552 22:46:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.552 22:46:30 -- setup/common.sh@18 -- # local node=0 00:04:02.552 22:46:30 -- setup/common.sh@19 -- # local var val 00:04:02.552 22:46:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.552 22:46:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.552 22:46:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.552 22:46:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.552 22:46:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.552 22:46:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.552 22:46:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50492388 kB' 'MemUsed: 15166620 kB' 'SwapCached: 0 kB' 'Active: 7126660 kB' 'Inactive: 3324860 kB' 'Active(anon): 6977420 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10221108 kB' 'Mapped: 57672 kB' 'AnonPages: 233628 kB' 'Shmem: 6747008 kB' 'KernelStack: 13176 kB' 'PageTables: 3832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 720660 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 529988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # 
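The per-node counters can also be read without parsing meminfo at all: the kernel exposes per-node, per-size hugepage counters under sysfs. A hedged alternative sketch using those standard paths (again, not what setup/common.sh itself does):
node=0 size_kb=2048   # 2 MiB pages, matching 'Hugepagesize: 2048 kB' in the snapshots above
base=/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB
printf 'node%s: %s total, %s free\n' "$node" "$(cat "$base/nr_hugepages")" "$(cat "$base/free_hugepages")"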
continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.552 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.552 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # continue 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.553 22:46:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.553 22:46:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.553 22:46:30 -- setup/common.sh@33 -- # echo 0 00:04:02.553 22:46:30 -- setup/common.sh@33 -- # return 0 00:04:02.553 22:46:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.553 22:46:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.553 22:46:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.553 22:46:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.553 22:46:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.553 node0=1024 expecting 1024 00:04:02.553 22:46:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.553 22:46:30 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.553 22:46:30 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.553 22:46:30 -- setup/hugepages.sh@202 -- # setup output 00:04:02.553 22:46:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.553 22:46:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:05.930 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:05.930 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:05.930 0000:00:01.1 (8086 0b00): Already using 
the vfio-pci driver 00:04:05.930 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.930 22:46:33 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:05.930 22:46:33 -- setup/hugepages.sh@89 -- # local node 00:04:05.930 22:46:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.930 22:46:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.930 22:46:33 -- setup/hugepages.sh@92 -- # local surp 00:04:05.930 22:46:33 -- setup/hugepages.sh@93 -- # local resv 00:04:05.930 22:46:33 -- setup/hugepages.sh@94 -- # local anon 00:04:05.930 22:46:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.930 22:46:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.930 22:46:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.930 22:46:33 -- setup/common.sh@18 -- # local node= 00:04:05.930 22:46:33 -- setup/common.sh@19 -- # local var val 00:04:05.930 22:46:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.930 22:46:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.930 22:46:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.930 22:46:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.930 22:46:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.930 22:46:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105391040 kB' 'MemAvailable: 108655704 kB' 'Buffers: 2704 kB' 'Cached: 14337032 kB' 'SwapCached: 0 kB' 'Active: 11401028 kB' 'Inactive: 3514596 kB' 'Active(anon): 10989000 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579360 kB' 'Mapped: 209232 kB' 'Shmem: 10413112 kB' 'KReclaimable: 324744 kB' 'Slab: 1194524 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869780 kB' 'KernelStack: 27136 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12466232 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235364 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 
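The setup.sh output above reports each listed PCI function as already bound to vfio-pci, and skips the 512-page request because 1024 hugepages are already allocated on node0. A quick way to confirm which driver a given PCI function is bound to is shown below; the BDF is taken from the log and the check is illustrative, not setup.sh's internals.
bdf=0000:65:00.0
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    echo "$bdf -> $(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
else
    echo "$bdf -> not bound to any driver"
fi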
00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.930 22:46:33 -- setup/common.sh@32 -- # continue 00:04:05.930 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.930 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 
22:46:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.931 22:46:34 -- setup/common.sh@33 -- # echo 0 00:04:05.931 22:46:34 -- setup/common.sh@33 -- # 
return 0 00:04:05.931 22:46:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.931 22:46:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.931 22:46:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.931 22:46:34 -- setup/common.sh@18 -- # local node= 00:04:05.931 22:46:34 -- setup/common.sh@19 -- # local var val 00:04:05.931 22:46:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.931 22:46:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.931 22:46:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.931 22:46:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.931 22:46:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.931 22:46:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105391000 kB' 'MemAvailable: 108655664 kB' 'Buffers: 2704 kB' 'Cached: 14337036 kB' 'SwapCached: 0 kB' 'Active: 11400376 kB' 'Inactive: 3514596 kB' 'Active(anon): 10988348 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578660 kB' 'Mapped: 209160 kB' 'Shmem: 10413116 kB' 'KReclaimable: 324744 kB' 'Slab: 1194604 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869860 kB' 'KernelStack: 27120 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12466244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.931 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.931 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 
-- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.932 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.932 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.932 22:46:34 -- setup/common.sh@33 -- # echo 0 00:04:05.932 22:46:34 -- setup/common.sh@33 -- # return 0 00:04:05.932 22:46:34 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.932 22:46:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.932 22:46:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.932 22:46:34 -- setup/common.sh@18 -- # local node= 00:04:05.932 22:46:34 -- setup/common.sh@19 -- # local var val 00:04:05.932 22:46:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.932 22:46:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.932 22:46:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.933 22:46:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.933 22:46:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.933 22:46:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105391424 kB' 'MemAvailable: 108656088 kB' 'Buffers: 2704 kB' 'Cached: 14337048 kB' 'SwapCached: 0 kB' 'Active: 11400384 kB' 'Inactive: 3514596 kB' 'Active(anon): 10988356 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578660 kB' 'Mapped: 209160 kB' 'Shmem: 10413128 kB' 'KReclaimable: 324744 kB' 'Slab: 1194604 kB' 'SReclaimable: 324744 kB' 'SUnreclaim: 869860 kB' 'KernelStack: 27120 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12466260 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 
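[editor's note] By this point the trace has already banked anon=0 (the AnonHugePages pass) and surp=0 (the HugePages_Surp pass), and this pass walks the same list again for HugePages_Rsvd. In outline the verification amounts to something like the sketch below; it reuses the illustrative get_meminfo_sketch helper from above, and the transparent-hugepage mode is read from the standard kernel sysfs path rather than anything SPDK-specific.
# Gather the three correction terms the verifier needs (sketch only).
nr_hugepages=1024        # the count this test configured
anon=0
# AnonHugePages is only counted while transparent hugepages are not [never];
# on this runner the mode string is "always [madvise] never", so the lookup runs.
if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)     # -> 0 here
fi
surp=$(get_meminfo_sketch HugePages_Surp)        # -> 0 here
resv=$(get_meminfo_sketch HugePages_Rsvd)        # -> 0 here
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"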
00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.933 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.933 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 
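[editor's note] The records just below close out the HugePages_Rsvd scan (resv=0), echo the four counters, repeat the lookup once more for HugePages_Total, and then switch to per-NUMA-node accounting read from /sys/devices/system/node/node*/meminfo. Condensed into a sketch (again reusing the illustrative helper above, with the values as they appear in this run; the real script enumerates nodes with an extglob pattern, simplified here to a plain glob):
resv=0 surp=0 nr_hugepages=1024                       # values echoed in this run
total=$(get_meminfo_sketch HugePages_Total)           # -> 1024 on this runner
# Global accounting must line up before the per-node split is inspected.
if (( total == nr_hugepages + surp + resv )); then
    # Two nodes are discovered on this runner; the trace below re-reads the
    # counters for node0 from its own meminfo file.
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "node${node##*node}: surplus=$(get_meminfo_sketch HugePages_Surp "${node##*node}")"
    done
fi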
00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.934 22:46:34 -- setup/common.sh@33 -- # echo 0 00:04:05.934 22:46:34 -- setup/common.sh@33 -- # return 0 00:04:05.934 22:46:34 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.934 22:46:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.934 nr_hugepages=1024 00:04:05.934 22:46:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.934 resv_hugepages=0 00:04:05.934 22:46:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.934 surplus_hugepages=0 00:04:05.934 22:46:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.934 anon_hugepages=0 00:04:05.934 22:46:34 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.934 22:46:34 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.934 22:46:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.934 22:46:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.934 22:46:34 -- setup/common.sh@18 -- # local node= 00:04:05.934 22:46:34 -- setup/common.sh@19 -- # local var val 00:04:05.934 22:46:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.934 22:46:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.934 22:46:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.934 22:46:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.934 22:46:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.934 22:46:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 105391364 kB' 'MemAvailable: 108656028 kB' 'Buffers: 2704 kB' 'Cached: 14337048 kB' 'SwapCached: 0 kB' 'Active: 11400420 kB' 'Inactive: 3514596 kB' 'Active(anon): 10988392 kB' 'Inactive(anon): 0 kB' 'Active(file): 412028 kB' 'Inactive(file): 3514596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578696 kB' 'Mapped: 209160 kB' 'Shmem: 10413128 kB' 'KReclaimable: 324744 kB' 'Slab: 1194604 kB' 'SReclaimable: 324744 kB' 
'SUnreclaim: 869860 kB' 'KernelStack: 27136 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 12466276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 131328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4468084 kB' 'DirectMap2M: 29814784 kB' 'DirectMap1G: 101711872 kB' 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.934 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.934 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.935 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.935 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # continue 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.936 22:46:34 -- setup/common.sh@33 -- # echo 1024 00:04:05.936 22:46:34 -- setup/common.sh@33 -- # return 0 00:04:05.936 22:46:34 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.936 22:46:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.936 22:46:34 -- setup/hugepages.sh@27 -- # local node 00:04:05.936 22:46:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.936 22:46:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.936 22:46:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.936 22:46:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.936 22:46:34 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.936 22:46:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.936 22:46:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.936 22:46:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.936 22:46:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.936 22:46:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.936 22:46:34 -- setup/common.sh@18 -- # local node=0 00:04:05.936 22:46:34 -- setup/common.sh@19 -- # local var val 00:04:05.936 22:46:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.936 22:46:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.936 22:46:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.936 22:46:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.936 22:46:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.936 22:46:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.936 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.936 22:46:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 50482528 kB' 'MemUsed: 15176480 kB' 'SwapCached: 0 kB' 'Active: 7127724 kB' 'Inactive: 3324860 kB' 'Active(anon): 6978484 kB' 'Inactive(anon): 0 kB' 'Active(file): 149240 kB' 'Inactive(file): 3324860 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10221200 kB' 'Mapped: 56768 kB' 'AnonPages: 234628 kB' 'Shmem: 6747100 kB' 'KernelStack: 13224 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 190672 kB' 'Slab: 720688 kB' 'SReclaimable: 190672 kB' 'SUnreclaim: 530016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.198 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.198 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # 
continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # continue 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.199 22:46:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.199 22:46:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.199 22:46:34 -- setup/common.sh@33 -- # echo 0 00:04:06.199 22:46:34 -- setup/common.sh@33 -- # return 0 00:04:06.199 22:46:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.199 22:46:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.199 22:46:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.199 22:46:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.199 22:46:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.199 node0=1024 expecting 1024 00:04:06.199 22:46:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.199 00:04:06.199 real 0m7.306s 00:04:06.199 user 0m2.840s 00:04:06.199 sys 0m4.535s 00:04:06.199 22:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.199 22:46:34 -- common/autotest_common.sh@10 -- # set +x 00:04:06.199 ************************************ 00:04:06.199 END TEST no_shrink_alloc 00:04:06.199 ************************************ 00:04:06.199 22:46:34 -- 
setup/hugepages.sh@217 -- # clear_hp 00:04:06.199 22:46:34 -- setup/hugepages.sh@37 -- # local node hp 00:04:06.199 22:46:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.199 22:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.199 22:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.199 22:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.199 22:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.199 22:46:34 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.200 22:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.200 22:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.200 22:46:34 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.200 22:46:34 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.200 22:46:34 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.200 22:46:34 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.200 00:04:06.200 real 0m25.745s 00:04:06.200 user 0m9.906s 00:04:06.200 sys 0m16.001s 00:04:06.200 22:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.200 22:46:34 -- common/autotest_common.sh@10 -- # set +x 00:04:06.200 ************************************ 00:04:06.200 END TEST hugepages 00:04:06.200 ************************************ 00:04:06.200 22:46:34 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.200 22:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:06.200 22:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:06.200 22:46:34 -- common/autotest_common.sh@10 -- # set +x 00:04:06.200 ************************************ 00:04:06.200 START TEST driver 00:04:06.200 ************************************ 00:04:06.200 22:46:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:06.200 * Looking for test storage... 
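
The per-key meminfo scan traced above (the long run of `IFS=': '` / `read -r var val _` / `continue` steps) is the core of get_meminfo in test/setup/common.sh. A simplified reconstruction from the trace follows; the exact argument handling in the real helper may differ slightly.

# Simplified reconstruction of get_meminfo, based on the xtrace above.
shopt -s extglob   # needed for the "Node <N> " prefix strip below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f=/proc/meminfo
    # With a node argument, prefer that node's own meminfo if it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # per-node lines start with "Node <N> "
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # e.g. HugePages_Total, HugePages_Surp
        echo "$val"                         # value in kB, or a bare count
        return 0
    done
    return 1
}

# As seen in the trace:
#   get_meminfo HugePages_Total      -> 1024
#   get_meminfo HugePages_Surp 0     -> 0   (node0)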
00:04:06.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:06.200 22:46:34 -- setup/driver.sh@68 -- # setup reset 00:04:06.200 22:46:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.200 22:46:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.498 22:46:38 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:11.498 22:46:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.498 22:46:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.498 22:46:38 -- common/autotest_common.sh@10 -- # set +x 00:04:11.498 ************************************ 00:04:11.498 START TEST guess_driver 00:04:11.498 ************************************ 00:04:11.498 22:46:38 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:11.498 22:46:38 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:11.498 22:46:38 -- setup/driver.sh@47 -- # local fail=0 00:04:11.498 22:46:38 -- setup/driver.sh@49 -- # pick_driver 00:04:11.498 22:46:38 -- setup/driver.sh@36 -- # vfio 00:04:11.498 22:46:38 -- setup/driver.sh@21 -- # local iommu_grups 00:04:11.498 22:46:38 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:11.498 22:46:38 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:11.498 22:46:38 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:11.498 22:46:38 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:11.498 22:46:38 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:11.498 22:46:38 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:11.498 22:46:38 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:11.498 22:46:38 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:11.498 22:46:38 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:11.498 22:46:38 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:11.498 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:11.498 22:46:38 -- setup/driver.sh@30 -- # return 0 00:04:11.498 22:46:38 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:11.498 22:46:38 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:11.498 22:46:38 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:11.498 22:46:38 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:11.498 Looking for driver=vfio-pci 00:04:11.498 22:46:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.498 22:46:38 -- setup/driver.sh@45 -- # setup output config 00:04:11.498 22:46:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.498 22:46:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.044 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.044 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:04:14.044 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.305 22:46:42 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:04:14.305 22:46:42 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.305 22:46:42 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.876 22:46:42 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:14.876 22:46:42 -- setup/driver.sh@65 -- # setup reset 00:04:14.876 22:46:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.876 22:46:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.170 00:04:20.170 real 0m8.689s 00:04:20.170 user 0m2.895s 00:04:20.170 sys 0m5.026s 00:04:20.170 22:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.170 22:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:20.170 ************************************ 00:04:20.170 END TEST guess_driver 00:04:20.170 ************************************ 00:04:20.170 00:04:20.170 real 0m13.371s 00:04:20.170 user 0m4.255s 00:04:20.170 sys 0m7.500s 00:04:20.170 22:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.170 22:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:20.170 ************************************ 00:04:20.170 END TEST driver 00:04:20.170 ************************************ 00:04:20.170 22:46:47 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.170 22:46:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.170 22:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.170 22:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:20.170 ************************************ 00:04:20.170 START TEST devices 00:04:20.170 ************************************ 00:04:20.170 22:46:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:20.170 * Looking for test storage... 
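
The guess_driver trace above reduces to: count IOMMU groups, check the unsafe no-IOMMU knob, and accept vfio_pci only if modprobe can resolve it to a real .ko. A rough sketch with helper names taken from the trace and bodies simplified:

is_driver() {
    # modprobe --show-depends prints "insmod /lib/modules/.../<mod>.ko.xz" lines
    modprobe --show-depends "$1" 2>/dev/null | grep -q '\.ko'
}

vfio() {
    local groups unsafe_vfio=N
    groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # 314 IOMMU groups were found in this run, so vfio-pci wins.
    if { (( groups > 0 )) || [[ $unsafe_vfio == Y ]]; } && is_driver vfio_pci; then
        echo vfio-pci
        return 0
    fi
    echo 'No valid driver found'
    return 1
}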
00:04:20.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:20.170 22:46:47 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:20.170 22:46:47 -- setup/devices.sh@192 -- # setup reset 00:04:20.170 22:46:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:20.170 22:46:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.478 22:46:51 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.478 22:46:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:23.478 22:46:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:23.478 22:46:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:23.478 22:46:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:23.478 22:46:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:23.478 22:46:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:23.478 22:46:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.478 22:46:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:23.478 22:46:51 -- setup/devices.sh@196 -- # blocks=() 00:04:23.478 22:46:51 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.478 22:46:51 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.478 22:46:51 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.478 22:46:51 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.478 22:46:51 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.478 22:46:51 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.478 22:46:51 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.478 22:46:51 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:23.478 22:46:51 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:23.478 22:46:51 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.478 22:46:51 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:23.478 22:46:51 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.478 No valid GPT data, bailing 00:04:23.478 22:46:51 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.478 22:46:51 -- scripts/common.sh@393 -- # pt= 00:04:23.478 22:46:51 -- scripts/common.sh@394 -- # return 1 00:04:23.478 22:46:51 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.478 22:46:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.478 22:46:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.478 22:46:51 -- setup/common.sh@80 -- # echo 1920383410176 00:04:23.478 22:46:51 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:23.478 22:46:51 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.478 22:46:51 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:23.478 22:46:51 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:23.478 22:46:51 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:23.478 22:46:51 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:23.478 22:46:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:23.478 22:46:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:23.478 22:46:51 -- common/autotest_common.sh@10 -- # set +x 00:04:23.478 ************************************ 00:04:23.478 START TEST nvme_mount 00:04:23.478 ************************************ 00:04:23.478 22:46:51 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:23.478 22:46:51 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:23.478 22:46:51 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:23.478 22:46:51 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.478 22:46:51 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.478 22:46:51 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:23.478 22:46:51 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.478 22:46:51 -- setup/common.sh@40 -- # local part_no=1 00:04:23.478 22:46:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:23.478 22:46:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.478 22:46:51 -- setup/common.sh@44 -- # parts=() 00:04:23.478 22:46:51 -- setup/common.sh@44 -- # local parts 00:04:23.478 22:46:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.478 22:46:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.478 22:46:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.478 22:46:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:23.478 22:46:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.478 22:46:51 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.478 22:46:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.478 22:46:51 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.438 Creating new GPT entries in memory. 00:04:24.438 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.438 other utilities. 00:04:24.438 22:46:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.438 22:46:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.438 22:46:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.438 22:46:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.438 22:46:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.383 Creating new GPT entries in memory. 00:04:25.383 The operation has completed successfully. 
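
The sgdisk calls above come from partition_drive in test/setup/common.sh: zap the GPT, then carve equal 1 GiB partitions starting at sector 2048 (1073741824 / 512 = 2097152 sectors, hence --new=1:2048:2099199). An approximate sketch; the uevent synchronization via sync_dev_uevents.sh is omitted.

partition_drive() {
    local disk=$1 part_no=${2:-2} size=${3:-1073741824}   # 1 GiB per partition
    local part part_start=0 part_end=0
    (( size /= 512 ))                   # bytes -> 512-byte sectors for sgdisk
    sgdisk "/dev/$disk" --zap-all
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

# As traced: partition_drive nvme0n1 1  -> one partition, sectors 2048..2099199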
00:04:25.383 22:46:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.383 22:46:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.383 22:46:53 -- setup/common.sh@62 -- # wait 3860558 00:04:25.645 22:46:53 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.645 22:46:53 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.645 22:46:53 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.645 22:46:53 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.645 22:46:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.645 22:46:53 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.645 22:46:53 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.645 22:46:53 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:25.645 22:46:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.645 22:46:53 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.645 22:46:53 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.645 22:46:53 -- setup/devices.sh@53 -- # local found=0 00:04:25.645 22:46:53 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.645 22:46:53 -- setup/devices.sh@56 -- # : 00:04:25.645 22:46:53 -- setup/devices.sh@59 -- # local pci status 00:04:25.645 22:46:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.645 22:46:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.645 22:46:53 -- setup/devices.sh@47 -- # setup output config 00:04:25.645 22:46:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.645 22:46:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:28.952 22:46:56 -- setup/devices.sh@63 -- # found=1 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 
22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:28.952 22:46:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.952 22:46:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.952 22:46:56 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:28.952 22:46:56 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.952 22:46:56 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.952 22:46:56 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.952 22:46:56 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:28.952 22:46:56 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.952 22:46:56 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.952 22:46:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.952 22:46:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.952 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.952 22:46:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.952 22:46:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.214 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.214 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.214 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.214 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.214 22:46:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:29.214 22:46:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:29.214 22:46:57 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.214 22:46:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:29.214 22:46:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:29.214 22:46:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.214 22:46:57 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.214 22:46:57 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:29.214 22:46:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:29.214 22:46:57 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.214 22:46:57 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.214 22:46:57 -- setup/devices.sh@53 -- # local found=0 00:04:29.214 22:46:57 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.214 22:46:57 -- setup/devices.sh@56 -- # : 00:04:29.214 22:46:57 -- setup/devices.sh@59 -- # local pci status 00:04:29.214 22:46:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.214 22:46:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.214 22:46:57 -- setup/devices.sh@47 -- # setup output config 00:04:29.214 22:46:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.214 22:46:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:32.601 22:47:00 -- setup/devices.sh@63 -- # found=1 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.601 22:47:00 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:32.601 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.862 22:47:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.862 22:47:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:32.863 22:47:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.863 22:47:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:32.863 22:47:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:32.863 22:47:00 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:32.863 22:47:00 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:32.863 22:47:00 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:32.863 22:47:00 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:32.863 22:47:00 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.863 22:47:00 -- setup/devices.sh@51 -- # local test_file= 00:04:32.863 22:47:00 -- setup/devices.sh@53 -- # local found=0 00:04:32.863 22:47:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.863 22:47:00 -- setup/devices.sh@59 -- # local pci status 00:04:32.863 22:47:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.863 22:47:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:32.863 22:47:00 -- setup/devices.sh@47 -- # setup output config 00:04:32.863 22:47:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.863 22:47:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.162 22:47:04 -- 
setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:36.162 22:47:04 -- setup/devices.sh@63 -- # found=1 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.162 22:47:04 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:36.162 22:47:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.422 22:47:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.422 22:47:04 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.422 22:47:04 -- setup/devices.sh@68 -- # return 0 00:04:36.422 22:47:04 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:36.422 22:47:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.422 22:47:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:36.422 22:47:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.422 22:47:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.422 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.422 00:04:36.422 real 0m13.022s 00:04:36.422 user 0m4.046s 00:04:36.422 sys 0m6.803s 00:04:36.422 22:47:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.422 22:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.422 ************************************ 00:04:36.422 END TEST nvme_mount 00:04:36.422 ************************************ 00:04:36.422 22:47:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:36.422 22:47:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:36.422 22:47:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:36.422 22:47:04 -- common/autotest_common.sh@10 -- # set +x 00:04:36.422 ************************************ 00:04:36.422 START TEST dm_mount 00:04:36.422 ************************************ 00:04:36.422 22:47:04 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:36.422 22:47:04 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:36.423 22:47:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:36.423 22:47:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:36.423 22:47:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:36.423 22:47:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.423 22:47:04 -- setup/common.sh@40 -- # local part_no=2 00:04:36.423 22:47:04 -- setup/common.sh@41 -- # local size=1073741824 00:04:36.423 22:47:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.423 22:47:04 -- setup/common.sh@44 -- # parts=() 00:04:36.423 22:47:04 -- setup/common.sh@44 -- # local parts 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.423 22:47:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.423 22:47:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.423 22:47:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.423 22:47:04 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:36.423 22:47:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:36.423 22:47:04 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:37.805 Creating new GPT entries in memory. 00:04:37.805 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:37.805 other utilities. 00:04:37.805 22:47:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:37.805 22:47:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.805 22:47:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:37.805 22:47:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:37.805 22:47:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:38.747 Creating new GPT entries in memory. 00:04:38.747 The operation has completed successfully. 
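
The umount/wipefs sequence traced just before this point is cleanup_nvme from test/setup/devices.sh. A short sketch, with the long workspace path abbreviated to a hypothetical $SPDK_DIR:

nvme_mount=$SPDK_DIR/test/setup/nvme_mount   # assumed shorthand for the traced path

cleanup_nvme() {
    # Unmount only if the test mount point is actually mounted.
    if mountpoint -q "$nvme_mount"; then
        umount "$nvme_mount"
    fi
    # Then scrub any filesystem/GPT signatures left on the partition and the disk.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
}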
00:04:38.747 22:47:06 -- setup/common.sh@57 -- # (( part++ )) 00:04:38.747 22:47:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.747 22:47:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.747 22:47:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.747 22:47:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:39.693 The operation has completed successfully. 00:04:39.693 22:47:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.693 22:47:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.693 22:47:07 -- setup/common.sh@62 -- # wait 3865817 00:04:39.693 22:47:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:39.693 22:47:07 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.693 22:47:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.693 22:47:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:39.693 22:47:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:39.693 22:47:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.693 22:47:07 -- setup/devices.sh@161 -- # break 00:04:39.693 22:47:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.693 22:47:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:39.693 22:47:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:39.693 22:47:07 -- setup/devices.sh@166 -- # dm=dm-0 00:04:39.693 22:47:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:39.693 22:47:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:39.693 22:47:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.693 22:47:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:39.693 22:47:07 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.693 22:47:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.693 22:47:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:39.693 22:47:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.693 22:47:07 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.693 22:47:07 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:39.693 22:47:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:39.693 22:47:07 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:39.693 22:47:07 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:39.693 22:47:07 -- setup/devices.sh@53 -- # local found=0 00:04:39.693 22:47:07 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.693 22:47:07 -- setup/devices.sh@56 -- # : 00:04:39.693 22:47:07 -- 
setup/devices.sh@59 -- # local pci status 00:04:39.693 22:47:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.693 22:47:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:39.693 22:47:07 -- setup/devices.sh@47 -- # setup output config 00:04:39.693 22:47:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.693 22:47:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.000 22:47:10 -- setup/devices.sh@63 -- # found=1 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.000 22:47:10 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:43.000 22:47:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.260 22:47:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.260 22:47:11 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:43.260 22:47:11 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.260 22:47:11 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.260 22:47:11 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.260 22:47:11 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:43.261 22:47:11 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:43.261 22:47:11 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:43.261 22:47:11 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:43.261 22:47:11 -- setup/devices.sh@50 -- # local mount_point= 00:04:43.261 22:47:11 -- setup/devices.sh@51 -- # local test_file= 00:04:43.261 22:47:11 -- setup/devices.sh@53 -- # local found=0 00:04:43.261 22:47:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.261 22:47:11 -- setup/devices.sh@59 -- # local pci status 00:04:43.261 22:47:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.261 22:47:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:43.261 22:47:11 -- setup/devices.sh@47 -- # setup output config 00:04:43.261 22:47:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.261 22:47:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.566 22:47:14 -- setup/devices.sh@63 -- # found=1 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 
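
Between the partitioning and the verify step, the trace shows how dm_mount builds and resolves its device-mapper target. A condensed, partly commented sketch; the table handed to dmsetup is not visible in the trace and is therefore only described, while the names (nvme_dm_test, dm-0) come from the trace itself.

dm_name=nvme_dm_test
# dmsetup create "$dm_name"   # real test feeds a table spanning nvme0n1p1+p2 on stdin

for t in {1..5}; do                        # give udev a moment to create the node
    [[ -e /dev/mapper/$dm_name ]] && break
    sleep 1
done

dm=$(readlink -f "/dev/mapper/$dm_name")   # -> /dev/dm-0 in this run
dm=${dm##*/}                               # -> dm-0

# Both backing partitions should now report the dm device as a holder.
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]

# The mapper device is then formatted and mounted like the raw namespace was:
#   mkfs.ext4 -qF /dev/mapper/$dm_name && mount /dev/mapper/$dm_name "$dm_mount"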
00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.566 22:47:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.566 22:47:14 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.566 22:47:14 -- setup/devices.sh@68 -- # return 0 00:04:46.566 22:47:14 -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.566 22:47:14 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.566 22:47:14 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.566 22:47:14 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.566 22:47:14 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.566 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.566 22:47:14 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.566 00:04:46.566 real 0m10.111s 00:04:46.566 user 0m2.602s 00:04:46.566 sys 0m4.517s 00:04:46.566 22:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.566 22:47:14 -- common/autotest_common.sh@10 -- # set +x 00:04:46.566 ************************************ 00:04:46.566 END TEST dm_mount 00:04:46.566 ************************************ 00:04:46.566 22:47:14 -- setup/devices.sh@1 -- # cleanup 00:04:46.566 22:47:14 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.566 22:47:14 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.566 22:47:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.566 22:47:14 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.828 22:47:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.828 22:47:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.828 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.828 /dev/nvme0n1: 8 bytes were erased at offset 
0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.828 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.828 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.828 22:47:15 -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.828 22:47:15 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.090 22:47:15 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.090 22:47:15 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.090 22:47:15 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.090 22:47:15 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.090 22:47:15 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.090 00:04:47.090 real 0m27.395s 00:04:47.090 user 0m8.150s 00:04:47.090 sys 0m13.938s 00:04:47.090 22:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.090 22:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.090 ************************************ 00:04:47.090 END TEST devices 00:04:47.090 ************************************ 00:04:47.090 00:04:47.090 real 1m32.226s 00:04:47.090 user 0m30.937s 00:04:47.090 sys 0m52.353s 00:04:47.090 22:47:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.090 22:47:15 -- common/autotest_common.sh@10 -- # set +x 00:04:47.090 ************************************ 00:04:47.090 END TEST setup.sh 00:04:47.090 ************************************ 00:04:47.090 22:47:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:50.400 Hugepages 00:04:50.400 node hugesize free / total 00:04:50.400 node0 1048576kB 0 / 0 00:04:50.400 node0 2048kB 2048 / 2048 00:04:50.400 node1 1048576kB 0 / 0 00:04:50.400 node1 2048kB 0 / 0 00:04:50.400 00:04:50.400 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.400 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:50.400 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:50.400 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:50.400 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:50.400 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:50.400 22:47:18 -- spdk/autotest.sh@141 -- # uname -s 00:04:50.400 22:47:18 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:50.400 22:47:18 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:50.400 22:47:18 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:53.759 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.3 (8086 0b00): 
ioatdma -> vfio-pci 00:04:53.759 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:53.759 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:54.020 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:55.938 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:55.938 22:47:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:57.325 22:47:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:57.325 22:47:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:57.325 22:47:25 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.325 22:47:25 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:57.325 22:47:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.325 22:47:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.325 22:47:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.325 22:47:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.325 22:47:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.325 22:47:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:57.325 22:47:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:57.325 22:47:25 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.628 Waiting for block devices as requested 00:05:00.628 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:00.628 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:00.628 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:00.628 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:00.889 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:00.890 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:00.890 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:01.150 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:01.151 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:01.411 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:01.411 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:01.411 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:01.411 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:01.672 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:01.672 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:01.672 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:01.672 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:01.933 22:47:30 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:01.933 22:47:30 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:01.933 22:47:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:01.933 22:47:30 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:01.933 22:47:30 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:01.933 22:47:30 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:01.933 22:47:30 -- common/autotest_common.sh@1530 -- # oacs=' 0x5f' 00:05:01.933 22:47:30 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:01.933 22:47:30 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:01.933 22:47:30 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:01.933 22:47:30 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:01.934 22:47:30 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:02.194 22:47:30 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:02.195 22:47:30 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:02.195 22:47:30 -- common/autotest_common.sh@1542 -- # continue 00:05:02.195 22:47:30 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:02.195 22:47:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:02.195 22:47:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.195 22:47:30 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:02.195 22:47:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:02.195 22:47:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.195 22:47:30 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:05.507 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:05.507 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:05.769 22:47:33 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:05.769 22:47:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.769 22:47:33 -- common/autotest_common.sh@10 -- # set +x 00:05:05.769 22:47:33 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:05.769 22:47:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:05.769 22:47:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.769 22:47:33 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:05.769 22:47:33 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:05.769 22:47:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:05.769 22:47:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.769 
22:47:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.769 22:47:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.769 22:47:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:05.769 22:47:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:06.031 22:47:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:06.031 22:47:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:06.031 22:47:33 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:06.031 22:47:33 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:06.031 22:47:33 -- common/autotest_common.sh@1565 -- # device=0xa80a 00:05:06.031 22:47:33 -- common/autotest_common.sh@1566 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:06.031 22:47:33 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:06.031 22:47:33 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:06.031 22:47:33 -- common/autotest_common.sh@1578 -- # return 0 00:05:06.031 22:47:33 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:06.031 22:47:33 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:06.031 22:47:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:06.031 22:47:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:06.031 22:47:33 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:06.031 22:47:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:06.031 22:47:33 -- common/autotest_common.sh@10 -- # set +x 00:05:06.031 22:47:33 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.031 22:47:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.031 22:47:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.031 22:47:33 -- common/autotest_common.sh@10 -- # set +x 00:05:06.031 ************************************ 00:05:06.031 START TEST env 00:05:06.031 ************************************ 00:05:06.031 22:47:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:06.031 * Looking for test storage... 
00:05:06.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:06.031 22:47:34 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.031 22:47:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.031 22:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.031 22:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:06.031 ************************************ 00:05:06.031 START TEST env_memory 00:05:06.031 ************************************ 00:05:06.031 22:47:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:06.031 00:05:06.031 00:05:06.031 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.031 http://cunit.sourceforge.net/ 00:05:06.031 00:05:06.031 00:05:06.031 Suite: memory 00:05:06.031 Test: alloc and free memory map ...[2024-06-09 22:47:34.161718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.031 passed 00:05:06.031 Test: mem map translation ...[2024-06-09 22:47:34.189004] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.031 [2024-06-09 22:47:34.189036] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.031 [2024-06-09 22:47:34.189085] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.031 [2024-06-09 22:47:34.189092] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.294 passed 00:05:06.294 Test: mem map registration ...[2024-06-09 22:47:34.246866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:06.294 [2024-06-09 22:47:34.246888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:06.294 passed 00:05:06.294 Test: mem map adjacent registrations ...passed 00:05:06.294 00:05:06.294 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.294 suites 1 1 n/a 0 0 00:05:06.294 tests 4 4 4 0 0 00:05:06.294 asserts 152 152 152 0 n/a 00:05:06.294 00:05:06.294 Elapsed time = 0.201 seconds 00:05:06.294 00:05:06.294 real 0m0.215s 00:05:06.294 user 0m0.202s 00:05:06.294 sys 0m0.012s 00:05:06.294 22:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.294 22:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:06.294 ************************************ 00:05:06.294 END TEST env_memory 00:05:06.294 ************************************ 00:05:06.294 22:47:34 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:06.294 22:47:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.294 22:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.294 22:47:34 -- common/autotest_common.sh@10 -- # set +x 
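Each env sub-test above and below is driven by the run_test helper traced from autotest_common.sh, which wraps a CUnit binary with the START/END banners and the real/user/sys timing lines seen throughout this log. The following is a minimal bash reconstruction of that pattern for readability, not the verbatim SPDK helper; the banner text and the example path are taken from the trace itself.

  # Minimal sketch of the banner-and-timing wrapper visible in this log.
  # Reconstruction only; the real helper lives in the SPDK test scripts.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                       # prints the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

  # Example, matching the invocation traced above:
  run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut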
00:05:06.294 ************************************ 00:05:06.294 START TEST env_vtophys 00:05:06.294 ************************************ 00:05:06.294 22:47:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:06.294 EAL: lib.eal log level changed from notice to debug 00:05:06.294 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.294 EAL: Detected lcore 1 as core 1 on socket 0 00:05:06.295 EAL: Detected lcore 2 as core 2 on socket 0 00:05:06.295 EAL: Detected lcore 3 as core 3 on socket 0 00:05:06.295 EAL: Detected lcore 4 as core 4 on socket 0 00:05:06.295 EAL: Detected lcore 5 as core 5 on socket 0 00:05:06.295 EAL: Detected lcore 6 as core 6 on socket 0 00:05:06.295 EAL: Detected lcore 7 as core 7 on socket 0 00:05:06.295 EAL: Detected lcore 8 as core 8 on socket 0 00:05:06.295 EAL: Detected lcore 9 as core 9 on socket 0 00:05:06.295 EAL: Detected lcore 10 as core 10 on socket 0 00:05:06.295 EAL: Detected lcore 11 as core 11 on socket 0 00:05:06.295 EAL: Detected lcore 12 as core 12 on socket 0 00:05:06.295 EAL: Detected lcore 13 as core 13 on socket 0 00:05:06.295 EAL: Detected lcore 14 as core 14 on socket 0 00:05:06.295 EAL: Detected lcore 15 as core 15 on socket 0 00:05:06.295 EAL: Detected lcore 16 as core 16 on socket 0 00:05:06.295 EAL: Detected lcore 17 as core 17 on socket 0 00:05:06.295 EAL: Detected lcore 18 as core 18 on socket 0 00:05:06.295 EAL: Detected lcore 19 as core 19 on socket 0 00:05:06.295 EAL: Detected lcore 20 as core 20 on socket 0 00:05:06.295 EAL: Detected lcore 21 as core 21 on socket 0 00:05:06.295 EAL: Detected lcore 22 as core 22 on socket 0 00:05:06.295 EAL: Detected lcore 23 as core 23 on socket 0 00:05:06.295 EAL: Detected lcore 24 as core 24 on socket 0 00:05:06.295 EAL: Detected lcore 25 as core 25 on socket 0 00:05:06.295 EAL: Detected lcore 26 as core 26 on socket 0 00:05:06.295 EAL: Detected lcore 27 as core 27 on socket 0 00:05:06.295 EAL: Detected lcore 28 as core 28 on socket 0 00:05:06.295 EAL: Detected lcore 29 as core 29 on socket 0 00:05:06.295 EAL: Detected lcore 30 as core 30 on socket 0 00:05:06.295 EAL: Detected lcore 31 as core 31 on socket 0 00:05:06.295 EAL: Detected lcore 32 as core 32 on socket 0 00:05:06.295 EAL: Detected lcore 33 as core 33 on socket 0 00:05:06.295 EAL: Detected lcore 34 as core 34 on socket 0 00:05:06.295 EAL: Detected lcore 35 as core 35 on socket 0 00:05:06.295 EAL: Detected lcore 36 as core 0 on socket 1 00:05:06.295 EAL: Detected lcore 37 as core 1 on socket 1 00:05:06.295 EAL: Detected lcore 38 as core 2 on socket 1 00:05:06.295 EAL: Detected lcore 39 as core 3 on socket 1 00:05:06.295 EAL: Detected lcore 40 as core 4 on socket 1 00:05:06.295 EAL: Detected lcore 41 as core 5 on socket 1 00:05:06.295 EAL: Detected lcore 42 as core 6 on socket 1 00:05:06.295 EAL: Detected lcore 43 as core 7 on socket 1 00:05:06.295 EAL: Detected lcore 44 as core 8 on socket 1 00:05:06.295 EAL: Detected lcore 45 as core 9 on socket 1 00:05:06.295 EAL: Detected lcore 46 as core 10 on socket 1 00:05:06.295 EAL: Detected lcore 47 as core 11 on socket 1 00:05:06.295 EAL: Detected lcore 48 as core 12 on socket 1 00:05:06.295 EAL: Detected lcore 49 as core 13 on socket 1 00:05:06.295 EAL: Detected lcore 50 as core 14 on socket 1 00:05:06.295 EAL: Detected lcore 51 as core 15 on socket 1 00:05:06.295 EAL: Detected lcore 52 as core 16 on socket 1 00:05:06.295 EAL: Detected lcore 53 as core 17 on socket 1 00:05:06.295 EAL: Detected lcore 54 as core 18 on socket 1 
00:05:06.295 EAL: Detected lcore 55 as core 19 on socket 1 00:05:06.295 EAL: Detected lcore 56 as core 20 on socket 1 00:05:06.295 EAL: Detected lcore 57 as core 21 on socket 1 00:05:06.295 EAL: Detected lcore 58 as core 22 on socket 1 00:05:06.295 EAL: Detected lcore 59 as core 23 on socket 1 00:05:06.295 EAL: Detected lcore 60 as core 24 on socket 1 00:05:06.295 EAL: Detected lcore 61 as core 25 on socket 1 00:05:06.295 EAL: Detected lcore 62 as core 26 on socket 1 00:05:06.295 EAL: Detected lcore 63 as core 27 on socket 1 00:05:06.295 EAL: Detected lcore 64 as core 28 on socket 1 00:05:06.295 EAL: Detected lcore 65 as core 29 on socket 1 00:05:06.295 EAL: Detected lcore 66 as core 30 on socket 1 00:05:06.295 EAL: Detected lcore 67 as core 31 on socket 1 00:05:06.295 EAL: Detected lcore 68 as core 32 on socket 1 00:05:06.295 EAL: Detected lcore 69 as core 33 on socket 1 00:05:06.295 EAL: Detected lcore 70 as core 34 on socket 1 00:05:06.295 EAL: Detected lcore 71 as core 35 on socket 1 00:05:06.295 EAL: Detected lcore 72 as core 0 on socket 0 00:05:06.295 EAL: Detected lcore 73 as core 1 on socket 0 00:05:06.295 EAL: Detected lcore 74 as core 2 on socket 0 00:05:06.295 EAL: Detected lcore 75 as core 3 on socket 0 00:05:06.295 EAL: Detected lcore 76 as core 4 on socket 0 00:05:06.295 EAL: Detected lcore 77 as core 5 on socket 0 00:05:06.295 EAL: Detected lcore 78 as core 6 on socket 0 00:05:06.295 EAL: Detected lcore 79 as core 7 on socket 0 00:05:06.295 EAL: Detected lcore 80 as core 8 on socket 0 00:05:06.295 EAL: Detected lcore 81 as core 9 on socket 0 00:05:06.295 EAL: Detected lcore 82 as core 10 on socket 0 00:05:06.295 EAL: Detected lcore 83 as core 11 on socket 0 00:05:06.295 EAL: Detected lcore 84 as core 12 on socket 0 00:05:06.295 EAL: Detected lcore 85 as core 13 on socket 0 00:05:06.295 EAL: Detected lcore 86 as core 14 on socket 0 00:05:06.295 EAL: Detected lcore 87 as core 15 on socket 0 00:05:06.295 EAL: Detected lcore 88 as core 16 on socket 0 00:05:06.295 EAL: Detected lcore 89 as core 17 on socket 0 00:05:06.295 EAL: Detected lcore 90 as core 18 on socket 0 00:05:06.295 EAL: Detected lcore 91 as core 19 on socket 0 00:05:06.295 EAL: Detected lcore 92 as core 20 on socket 0 00:05:06.295 EAL: Detected lcore 93 as core 21 on socket 0 00:05:06.295 EAL: Detected lcore 94 as core 22 on socket 0 00:05:06.295 EAL: Detected lcore 95 as core 23 on socket 0 00:05:06.295 EAL: Detected lcore 96 as core 24 on socket 0 00:05:06.295 EAL: Detected lcore 97 as core 25 on socket 0 00:05:06.295 EAL: Detected lcore 98 as core 26 on socket 0 00:05:06.295 EAL: Detected lcore 99 as core 27 on socket 0 00:05:06.295 EAL: Detected lcore 100 as core 28 on socket 0 00:05:06.295 EAL: Detected lcore 101 as core 29 on socket 0 00:05:06.295 EAL: Detected lcore 102 as core 30 on socket 0 00:05:06.295 EAL: Detected lcore 103 as core 31 on socket 0 00:05:06.295 EAL: Detected lcore 104 as core 32 on socket 0 00:05:06.295 EAL: Detected lcore 105 as core 33 on socket 0 00:05:06.295 EAL: Detected lcore 106 as core 34 on socket 0 00:05:06.295 EAL: Detected lcore 107 as core 35 on socket 0 00:05:06.295 EAL: Detected lcore 108 as core 0 on socket 1 00:05:06.295 EAL: Detected lcore 109 as core 1 on socket 1 00:05:06.295 EAL: Detected lcore 110 as core 2 on socket 1 00:05:06.295 EAL: Detected lcore 111 as core 3 on socket 1 00:05:06.295 EAL: Detected lcore 112 as core 4 on socket 1 00:05:06.295 EAL: Detected lcore 113 as core 5 on socket 1 00:05:06.295 EAL: Detected lcore 114 as core 6 on socket 1 00:05:06.295 
EAL: Detected lcore 115 as core 7 on socket 1 00:05:06.295 EAL: Detected lcore 116 as core 8 on socket 1 00:05:06.295 EAL: Detected lcore 117 as core 9 on socket 1 00:05:06.295 EAL: Detected lcore 118 as core 10 on socket 1 00:05:06.295 EAL: Detected lcore 119 as core 11 on socket 1 00:05:06.295 EAL: Detected lcore 120 as core 12 on socket 1 00:05:06.295 EAL: Detected lcore 121 as core 13 on socket 1 00:05:06.295 EAL: Detected lcore 122 as core 14 on socket 1 00:05:06.295 EAL: Detected lcore 123 as core 15 on socket 1 00:05:06.295 EAL: Detected lcore 124 as core 16 on socket 1 00:05:06.295 EAL: Detected lcore 125 as core 17 on socket 1 00:05:06.295 EAL: Detected lcore 126 as core 18 on socket 1 00:05:06.295 EAL: Detected lcore 127 as core 19 on socket 1 00:05:06.295 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:06.295 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:06.295 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:06.295 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:06.295 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:06.295 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:06.295 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:06.295 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:06.295 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:06.295 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:06.295 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:06.295 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:06.295 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:06.295 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:06.295 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:06.295 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:06.295 EAL: Maximum logical cores by configuration: 128 00:05:06.295 EAL: Detected CPU lcores: 128 00:05:06.295 EAL: Detected NUMA nodes: 2 00:05:06.295 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:06.295 EAL: Detected shared linkage of DPDK 00:05:06.295 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.295 EAL: Bus pci wants IOVA as 'DC' 00:05:06.295 EAL: Buses did not request a specific IOVA mode. 00:05:06.295 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:06.295 EAL: Selected IOVA mode 'VA' 00:05:06.295 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.295 EAL: Probing VFIO support... 00:05:06.295 EAL: IOMMU type 1 (Type 1) is supported 00:05:06.295 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:06.295 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:06.295 EAL: VFIO support initialized 00:05:06.295 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.295 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.295 EAL: Setting up physically contiguous memory... 
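At this point EAL has detected 128 lcores on 2 NUMA nodes, selected IOVA-as-VA, and initialized VFIO with IOMMU type 1, and it is about to lay out the hugepage memseg lists. The commands below are illustrative host-side checks that correspond to what EAL reports here (standard Linux sysfs and procfs paths, not part of the test scripts): they show the per-node 2048kB hugepage counts that setup.sh status printed earlier and whether an IOMMU and vfio-pci are available.

  # Hugepage availability per NUMA node (source of the node0/node1 2048kB counts).
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep Huge /proc/meminfo
  # IOMMU / VFIO availability (EAL reports "IOMMU type 1 (Type 1) is supported").
  ls /sys/kernel/iommu_groups | wc -l
  lsmod | grep vfio_pci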
00:05:06.295 EAL: Setting maximum number of open files to 524288 00:05:06.295 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.295 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:06.295 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.295 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.295 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.295 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.295 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.295 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.295 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.296 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:06.296 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.296 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:06.296 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:06.296 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.296 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:06.296 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:06.296 EAL: Hugepages will be freed exactly as allocated. 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: TSC frequency is ~2400000 KHz 00:05:06.296 EAL: Main lcore 0 is ready (tid=7f5bd8fc5a00;cpuset=[0]) 00:05:06.296 EAL: Trying to obtain current memory policy. 00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.296 EAL: Restoring previous memory policy: 0 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.296 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.296 00:05:06.296 00:05:06.296 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.296 http://cunit.sourceforge.net/ 00:05:06.296 00:05:06.296 00:05:06.296 Suite: components_suite 00:05:06.296 Test: vtophys_malloc_test ...passed 00:05:06.296 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.296 EAL: Restoring previous memory policy: 4 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.296 EAL: Trying to obtain current memory policy. 00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.296 EAL: Restoring previous memory policy: 4 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.296 EAL: Trying to obtain current memory policy. 00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.296 EAL: Restoring previous memory policy: 4 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.296 EAL: Trying to obtain current memory policy. 
00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.296 EAL: Restoring previous memory policy: 4 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.296 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.296 EAL: request: mp_malloc_sync 00:05:06.296 EAL: No shared files mode enabled, IPC is disabled 00:05:06.296 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.296 EAL: Trying to obtain current memory policy. 00:05:06.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.558 EAL: Restoring previous memory policy: 4 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.558 EAL: Trying to obtain current memory policy. 00:05:06.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.558 EAL: Restoring previous memory policy: 4 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.558 EAL: Trying to obtain current memory policy. 00:05:06.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.558 EAL: Restoring previous memory policy: 4 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.558 EAL: Trying to obtain current memory policy. 00:05:06.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.558 EAL: Restoring previous memory policy: 4 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.558 EAL: Trying to obtain current memory policy. 
00:05:06.558 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.558 EAL: Restoring previous memory policy: 4 00:05:06.558 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.558 EAL: request: mp_malloc_sync 00:05:06.558 EAL: No shared files mode enabled, IPC is disabled 00:05:06.558 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.820 EAL: request: mp_malloc_sync 00:05:06.820 EAL: No shared files mode enabled, IPC is disabled 00:05:06.820 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.820 EAL: Trying to obtain current memory policy. 00:05:06.820 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.820 EAL: Restoring previous memory policy: 4 00:05:06.820 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.820 EAL: request: mp_malloc_sync 00:05:06.820 EAL: No shared files mode enabled, IPC is disabled 00:05:06.820 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.082 EAL: request: mp_malloc_sync 00:05:07.082 EAL: No shared files mode enabled, IPC is disabled 00:05:07.082 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.082 passed 00:05:07.082 00:05:07.082 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.082 suites 1 1 n/a 0 0 00:05:07.082 tests 2 2 2 0 0 00:05:07.082 asserts 497 497 497 0 n/a 00:05:07.082 00:05:07.082 Elapsed time = 0.658 seconds 00:05:07.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.082 EAL: request: mp_malloc_sync 00:05:07.082 EAL: No shared files mode enabled, IPC is disabled 00:05:07.082 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.082 EAL: No shared files mode enabled, IPC is disabled 00:05:07.082 EAL: No shared files mode enabled, IPC is disabled 00:05:07.082 EAL: No shared files mode enabled, IPC is disabled 00:05:07.082 00:05:07.082 real 0m0.780s 00:05:07.082 user 0m0.415s 00:05:07.082 sys 0m0.337s 00:05:07.082 22:47:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.082 22:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.082 ************************************ 00:05:07.082 END TEST env_vtophys 00:05:07.082 ************************************ 00:05:07.082 22:47:35 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.082 22:47:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.082 22:47:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.082 22:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.082 ************************************ 00:05:07.082 START TEST env_pci 00:05:07.082 ************************************ 00:05:07.082 22:47:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:07.082 00:05:07.082 00:05:07.082 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.082 http://cunit.sourceforge.net/ 00:05:07.082 00:05:07.082 00:05:07.082 Suite: pci 00:05:07.082 Test: pci_hook ...[2024-06-09 22:47:35.207241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3877001 has claimed it 00:05:07.082 EAL: Cannot find device (10000:00:01.0) 00:05:07.082 EAL: Failed to attach device on primary process 00:05:07.082 passed 00:05:07.082 00:05:07.082 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.082 suites 1 1 n/a 0 0 00:05:07.082 tests 1 1 1 0 0 
00:05:07.082 asserts 25 25 25 0 n/a 00:05:07.082 00:05:07.082 Elapsed time = 0.033 seconds 00:05:07.082 00:05:07.082 real 0m0.054s 00:05:07.082 user 0m0.016s 00:05:07.082 sys 0m0.038s 00:05:07.082 22:47:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.082 22:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.082 ************************************ 00:05:07.082 END TEST env_pci 00:05:07.082 ************************************ 00:05:07.343 22:47:35 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:07.343 22:47:35 -- env/env.sh@15 -- # uname 00:05:07.343 22:47:35 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:07.344 22:47:35 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:07.344 22:47:35 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.344 22:47:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:07.344 22:47:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.344 22:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.344 ************************************ 00:05:07.344 START TEST env_dpdk_post_init 00:05:07.344 ************************************ 00:05:07.344 22:47:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:07.344 EAL: Detected CPU lcores: 128 00:05:07.344 EAL: Detected NUMA nodes: 2 00:05:07.344 EAL: Detected shared linkage of DPDK 00:05:07.344 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:07.344 EAL: Selected IOVA mode 'VA' 00:05:07.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.344 EAL: VFIO support initialized 00:05:07.344 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.344 EAL: Using IOMMU type 1 (Type 1) 00:05:07.344 EAL: Ignore mapping IO port bar(1) 00:05:07.605 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:07.605 EAL: Ignore mapping IO port bar(1) 00:05:07.866 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:07.866 EAL: Ignore mapping IO port bar(1) 00:05:08.128 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:08.128 EAL: Ignore mapping IO port bar(1) 00:05:08.128 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:08.390 EAL: Ignore mapping IO port bar(1) 00:05:08.390 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:08.651 EAL: Ignore mapping IO port bar(1) 00:05:08.651 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:08.913 EAL: Ignore mapping IO port bar(1) 00:05:08.913 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:08.913 EAL: Ignore mapping IO port bar(1) 00:05:09.175 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:09.436 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:09.436 EAL: Ignore mapping IO port bar(1) 00:05:09.697 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:09.697 EAL: Ignore mapping IO port bar(1) 00:05:09.697 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:09.959 EAL: Ignore mapping IO port bar(1) 00:05:09.959 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:05:10.220 EAL: Ignore mapping IO port bar(1) 00:05:10.220 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:10.481 EAL: Ignore mapping IO port bar(1) 00:05:10.481 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:10.481 EAL: Ignore mapping IO port bar(1) 00:05:10.742 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:10.742 EAL: Ignore mapping IO port bar(1) 00:05:11.025 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:11.025 EAL: Ignore mapping IO port bar(1) 00:05:11.303 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:11.304 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:11.304 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:11.304 Starting DPDK initialization... 00:05:11.304 Starting SPDK post initialization... 00:05:11.304 SPDK NVMe probe 00:05:11.304 Attaching to 0000:65:00.0 00:05:11.304 Attached to 0000:65:00.0 00:05:11.304 Cleaning up... 00:05:13.220 00:05:13.220 real 0m5.710s 00:05:13.220 user 0m0.183s 00:05:13.220 sys 0m0.069s 00:05:13.220 22:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.220 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.220 ************************************ 00:05:13.220 END TEST env_dpdk_post_init 00:05:13.220 ************************************ 00:05:13.220 22:47:41 -- env/env.sh@26 -- # uname 00:05:13.220 22:47:41 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:13.220 22:47:41 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.220 22:47:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.220 22:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.220 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.220 ************************************ 00:05:13.220 START TEST env_mem_callbacks 00:05:13.220 ************************************ 00:05:13.220 22:47:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:13.220 EAL: Detected CPU lcores: 128 00:05:13.220 EAL: Detected NUMA nodes: 2 00:05:13.220 EAL: Detected shared linkage of DPDK 00:05:13.220 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:13.220 EAL: Selected IOVA mode 'VA' 00:05:13.220 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.220 EAL: VFIO support initialized 00:05:13.220 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.220 00:05:13.220 00:05:13.220 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.220 http://cunit.sourceforge.net/ 00:05:13.220 00:05:13.220 00:05:13.220 Suite: memory 00:05:13.220 Test: test ... 
00:05:13.220 register 0x200000200000 2097152 00:05:13.220 malloc 3145728 00:05:13.220 register 0x200000400000 4194304 00:05:13.220 buf 0x200000500000 len 3145728 PASSED 00:05:13.220 malloc 64 00:05:13.220 buf 0x2000004fff40 len 64 PASSED 00:05:13.220 malloc 4194304 00:05:13.220 register 0x200000800000 6291456 00:05:13.220 buf 0x200000a00000 len 4194304 PASSED 00:05:13.220 free 0x200000500000 3145728 00:05:13.220 free 0x2000004fff40 64 00:05:13.220 unregister 0x200000400000 4194304 PASSED 00:05:13.220 free 0x200000a00000 4194304 00:05:13.220 unregister 0x200000800000 6291456 PASSED 00:05:13.220 malloc 8388608 00:05:13.220 register 0x200000400000 10485760 00:05:13.220 buf 0x200000600000 len 8388608 PASSED 00:05:13.221 free 0x200000600000 8388608 00:05:13.221 unregister 0x200000400000 10485760 PASSED 00:05:13.221 passed 00:05:13.221 00:05:13.221 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.221 suites 1 1 n/a 0 0 00:05:13.221 tests 1 1 1 0 0 00:05:13.221 asserts 15 15 15 0 n/a 00:05:13.221 00:05:13.221 Elapsed time = 0.008 seconds 00:05:13.221 00:05:13.221 real 0m0.065s 00:05:13.221 user 0m0.023s 00:05:13.221 sys 0m0.041s 00:05:13.221 22:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.221 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.221 ************************************ 00:05:13.221 END TEST env_mem_callbacks 00:05:13.221 ************************************ 00:05:13.221 00:05:13.221 real 0m7.155s 00:05:13.221 user 0m0.951s 00:05:13.221 sys 0m0.757s 00:05:13.221 22:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.221 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.221 ************************************ 00:05:13.221 END TEST env 00:05:13.221 ************************************ 00:05:13.221 22:47:41 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:13.221 22:47:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.221 22:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.221 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.221 ************************************ 00:05:13.221 START TEST rpc 00:05:13.221 ************************************ 00:05:13.221 22:47:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:13.221 * Looking for test storage... 00:05:13.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:13.221 22:47:41 -- rpc/rpc.sh@65 -- # spdk_pid=3878429 00:05:13.221 22:47:41 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.221 22:47:41 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:13.221 22:47:41 -- rpc/rpc.sh@67 -- # waitforlisten 3878429 00:05:13.221 22:47:41 -- common/autotest_common.sh@819 -- # '[' -z 3878429 ']' 00:05:13.221 22:47:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.221 22:47:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:13.221 22:47:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
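Once spdk_tgt below finishes starting and listens on the default /var/tmp/spdk.sock, the rpc_integrity test drives it through a malloc-plus-passthru bdev round trip. The sketch below mirrors the rpc_cmd calls traced below using scripts/rpc.py by hand; it is illustrative rather than the test script itself, and the shell variable names are mine.

  # Hand-driven version of the rpc_integrity flow; assumes spdk_tgt is already
  # running on the default RPC socket /var/tmp/spdk.sock, as in this log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  malloc=$($rpc bdev_malloc_create 8 512)   # 8 MiB malloc bdev, 512-byte blocks -> 16384 blocks
  $rpc bdev_passthru_create -b "$malloc" -p Passthru0
  $rpc bdev_get_bdevs | jq length           # 2: the malloc bdev and Passthru0
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete "$malloc"
  $rpc bdev_get_bdevs | jq length           # back to 0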
00:05:13.221 22:47:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:13.221 22:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.221 [2024-06-09 22:47:41.351826] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:13.221 [2024-06-09 22:47:41.351877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878429 ] 00:05:13.221 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.482 [2024-06-09 22:47:41.410118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.482 [2024-06-09 22:47:41.472817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:13.482 [2024-06-09 22:47:41.472934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.482 [2024-06-09 22:47:41.472944] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3878429' to capture a snapshot of events at runtime. 00:05:13.482 [2024-06-09 22:47:41.472951] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3878429 for offline analysis/debug. 00:05:13.482 [2024-06-09 22:47:41.472976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.159 22:47:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:14.159 22:47:42 -- common/autotest_common.sh@852 -- # return 0 00:05:14.159 22:47:42 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.159 22:47:42 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.159 22:47:42 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.159 22:47:42 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.159 22:47:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.159 22:47:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 ************************************ 00:05:14.159 START TEST rpc_integrity 00:05:14.159 ************************************ 00:05:14.159 22:47:42 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:14.159 22:47:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.159 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.159 22:47:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.159 22:47:42 -- rpc/rpc.sh@13 -- # jq length 00:05:14.159 22:47:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.159 22:47:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.159 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 22:47:42 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:14.159 22:47:42 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.159 22:47:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.159 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.159 22:47:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.159 { 00:05:14.159 "name": "Malloc0", 00:05:14.159 "aliases": [ 00:05:14.159 "fc6fb569-cf71-44f4-8e81-ee70ca7a13c2" 00:05:14.159 ], 00:05:14.159 "product_name": "Malloc disk", 00:05:14.159 "block_size": 512, 00:05:14.159 "num_blocks": 16384, 00:05:14.159 "uuid": "fc6fb569-cf71-44f4-8e81-ee70ca7a13c2", 00:05:14.159 "assigned_rate_limits": { 00:05:14.159 "rw_ios_per_sec": 0, 00:05:14.159 "rw_mbytes_per_sec": 0, 00:05:14.159 "r_mbytes_per_sec": 0, 00:05:14.159 "w_mbytes_per_sec": 0 00:05:14.159 }, 00:05:14.159 "claimed": false, 00:05:14.159 "zoned": false, 00:05:14.159 "supported_io_types": { 00:05:14.159 "read": true, 00:05:14.159 "write": true, 00:05:14.159 "unmap": true, 00:05:14.159 "write_zeroes": true, 00:05:14.159 "flush": true, 00:05:14.159 "reset": true, 00:05:14.159 "compare": false, 00:05:14.159 "compare_and_write": false, 00:05:14.159 "abort": true, 00:05:14.159 "nvme_admin": false, 00:05:14.159 "nvme_io": false 00:05:14.159 }, 00:05:14.159 "memory_domains": [ 00:05:14.159 { 00:05:14.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.159 "dma_device_type": 2 00:05:14.159 } 00:05:14.159 ], 00:05:14.159 "driver_specific": {} 00:05:14.159 } 00:05:14.159 ]' 00:05:14.159 22:47:42 -- rpc/rpc.sh@17 -- # jq length 00:05:14.159 22:47:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.159 22:47:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.159 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 [2024-06-09 22:47:42.263812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.159 [2024-06-09 22:47:42.263847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.159 [2024-06-09 22:47:42.263859] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219d470 00:05:14.159 [2024-06-09 22:47:42.263866] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.159 [2024-06-09 22:47:42.265202] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.159 [2024-06-09 22:47:42.265221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.159 Passthru0 00:05:14.159 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.159 22:47:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.159 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.159 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.159 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.159 22:47:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.159 { 00:05:14.159 "name": "Malloc0", 00:05:14.159 "aliases": [ 00:05:14.159 "fc6fb569-cf71-44f4-8e81-ee70ca7a13c2" 00:05:14.159 ], 00:05:14.159 "product_name": "Malloc disk", 00:05:14.159 "block_size": 512, 00:05:14.159 "num_blocks": 16384, 00:05:14.159 "uuid": "fc6fb569-cf71-44f4-8e81-ee70ca7a13c2", 00:05:14.159 "assigned_rate_limits": { 00:05:14.159 "rw_ios_per_sec": 0, 00:05:14.159 "rw_mbytes_per_sec": 0, 00:05:14.159 
"r_mbytes_per_sec": 0, 00:05:14.159 "w_mbytes_per_sec": 0 00:05:14.159 }, 00:05:14.159 "claimed": true, 00:05:14.159 "claim_type": "exclusive_write", 00:05:14.159 "zoned": false, 00:05:14.159 "supported_io_types": { 00:05:14.159 "read": true, 00:05:14.159 "write": true, 00:05:14.159 "unmap": true, 00:05:14.159 "write_zeroes": true, 00:05:14.159 "flush": true, 00:05:14.159 "reset": true, 00:05:14.159 "compare": false, 00:05:14.159 "compare_and_write": false, 00:05:14.159 "abort": true, 00:05:14.159 "nvme_admin": false, 00:05:14.159 "nvme_io": false 00:05:14.159 }, 00:05:14.159 "memory_domains": [ 00:05:14.159 { 00:05:14.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.159 "dma_device_type": 2 00:05:14.160 } 00:05:14.160 ], 00:05:14.160 "driver_specific": {} 00:05:14.160 }, 00:05:14.160 { 00:05:14.160 "name": "Passthru0", 00:05:14.160 "aliases": [ 00:05:14.160 "123270e9-c46d-5913-9491-c2e72976648f" 00:05:14.160 ], 00:05:14.160 "product_name": "passthru", 00:05:14.160 "block_size": 512, 00:05:14.160 "num_blocks": 16384, 00:05:14.160 "uuid": "123270e9-c46d-5913-9491-c2e72976648f", 00:05:14.160 "assigned_rate_limits": { 00:05:14.160 "rw_ios_per_sec": 0, 00:05:14.160 "rw_mbytes_per_sec": 0, 00:05:14.160 "r_mbytes_per_sec": 0, 00:05:14.160 "w_mbytes_per_sec": 0 00:05:14.160 }, 00:05:14.160 "claimed": false, 00:05:14.160 "zoned": false, 00:05:14.160 "supported_io_types": { 00:05:14.160 "read": true, 00:05:14.160 "write": true, 00:05:14.160 "unmap": true, 00:05:14.160 "write_zeroes": true, 00:05:14.160 "flush": true, 00:05:14.160 "reset": true, 00:05:14.160 "compare": false, 00:05:14.160 "compare_and_write": false, 00:05:14.160 "abort": true, 00:05:14.160 "nvme_admin": false, 00:05:14.160 "nvme_io": false 00:05:14.160 }, 00:05:14.160 "memory_domains": [ 00:05:14.160 { 00:05:14.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.160 "dma_device_type": 2 00:05:14.160 } 00:05:14.160 ], 00:05:14.160 "driver_specific": { 00:05:14.160 "passthru": { 00:05:14.160 "name": "Passthru0", 00:05:14.160 "base_bdev_name": "Malloc0" 00:05:14.160 } 00:05:14.160 } 00:05:14.160 } 00:05:14.160 ]' 00:05:14.160 22:47:42 -- rpc/rpc.sh@21 -- # jq length 00:05:14.160 22:47:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.160 22:47:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.160 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.160 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.420 22:47:42 -- rpc/rpc.sh@26 -- # jq length 00:05:14.420 22:47:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.420 00:05:14.420 real 0m0.266s 00:05:14.420 user 0m0.165s 00:05:14.420 sys 0m0.027s 00:05:14.420 22:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 ************************************ 
00:05:14.420 END TEST rpc_integrity 00:05:14.420 ************************************ 00:05:14.420 22:47:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.420 22:47:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.420 22:47:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 ************************************ 00:05:14.420 START TEST rpc_plugins 00:05:14.420 ************************************ 00:05:14.420 22:47:42 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:14.420 22:47:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.420 22:47:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.420 { 00:05:14.420 "name": "Malloc1", 00:05:14.420 "aliases": [ 00:05:14.420 "e958b86a-c0df-44a6-ba66-fa6687db198c" 00:05:14.420 ], 00:05:14.420 "product_name": "Malloc disk", 00:05:14.420 "block_size": 4096, 00:05:14.420 "num_blocks": 256, 00:05:14.420 "uuid": "e958b86a-c0df-44a6-ba66-fa6687db198c", 00:05:14.420 "assigned_rate_limits": { 00:05:14.420 "rw_ios_per_sec": 0, 00:05:14.420 "rw_mbytes_per_sec": 0, 00:05:14.420 "r_mbytes_per_sec": 0, 00:05:14.420 "w_mbytes_per_sec": 0 00:05:14.420 }, 00:05:14.420 "claimed": false, 00:05:14.420 "zoned": false, 00:05:14.420 "supported_io_types": { 00:05:14.420 "read": true, 00:05:14.420 "write": true, 00:05:14.420 "unmap": true, 00:05:14.420 "write_zeroes": true, 00:05:14.420 "flush": true, 00:05:14.420 "reset": true, 00:05:14.420 "compare": false, 00:05:14.420 "compare_and_write": false, 00:05:14.420 "abort": true, 00:05:14.420 "nvme_admin": false, 00:05:14.420 "nvme_io": false 00:05:14.420 }, 00:05:14.420 "memory_domains": [ 00:05:14.420 { 00:05:14.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.420 "dma_device_type": 2 00:05:14.420 } 00:05:14.420 ], 00:05:14.420 "driver_specific": {} 00:05:14.420 } 00:05:14.420 ]' 00:05:14.420 22:47:42 -- rpc/rpc.sh@32 -- # jq length 00:05:14.420 22:47:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.420 22:47:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:14.420 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.420 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.420 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.420 22:47:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:14.420 22:47:42 -- rpc/rpc.sh@36 -- # jq length 00:05:14.420 22:47:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:14.420 00:05:14.420 real 0m0.141s 00:05:14.420 user 0m0.090s 00:05:14.420 sys 0m0.014s 00:05:14.420 22:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.420 22:47:42 -- 
common/autotest_common.sh@10 -- # set +x 00:05:14.420 ************************************ 00:05:14.420 END TEST rpc_plugins 00:05:14.420 ************************************ 00:05:14.682 22:47:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:14.682 22:47:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.682 22:47:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.682 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 ************************************ 00:05:14.682 START TEST rpc_trace_cmd_test 00:05:14.682 ************************************ 00:05:14.682 22:47:42 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:14.682 22:47:42 -- rpc/rpc.sh@40 -- # local info 00:05:14.682 22:47:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:14.682 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.682 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.682 22:47:42 -- rpc/rpc.sh@42 -- # info='{ 00:05:14.682 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3878429", 00:05:14.682 "tpoint_group_mask": "0x8", 00:05:14.682 "iscsi_conn": { 00:05:14.682 "mask": "0x2", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "scsi": { 00:05:14.682 "mask": "0x4", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "bdev": { 00:05:14.682 "mask": "0x8", 00:05:14.682 "tpoint_mask": "0xffffffffffffffff" 00:05:14.682 }, 00:05:14.682 "nvmf_rdma": { 00:05:14.682 "mask": "0x10", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "nvmf_tcp": { 00:05:14.682 "mask": "0x20", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "ftl": { 00:05:14.682 "mask": "0x40", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "blobfs": { 00:05:14.682 "mask": "0x80", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "dsa": { 00:05:14.682 "mask": "0x200", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "thread": { 00:05:14.682 "mask": "0x400", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "nvme_pcie": { 00:05:14.682 "mask": "0x800", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "iaa": { 00:05:14.682 "mask": "0x1000", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "nvme_tcp": { 00:05:14.682 "mask": "0x2000", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 }, 00:05:14.682 "bdev_nvme": { 00:05:14.682 "mask": "0x4000", 00:05:14.682 "tpoint_mask": "0x0" 00:05:14.682 } 00:05:14.682 }' 00:05:14.682 22:47:42 -- rpc/rpc.sh@43 -- # jq length 00:05:14.682 22:47:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:14.682 22:47:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:14.682 22:47:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:14.682 22:47:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:14.682 22:47:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:14.682 22:47:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:14.682 22:47:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:14.682 22:47:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:14.682 22:47:42 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:14.682 00:05:14.682 real 0m0.163s 00:05:14.682 user 0m0.135s 00:05:14.682 sys 0m0.020s 00:05:14.682 22:47:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.682 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 ************************************ 
00:05:14.682 END TEST rpc_trace_cmd_test 00:05:14.682 ************************************ 00:05:14.682 22:47:42 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:14.682 22:47:42 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:14.682 22:47:42 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:14.682 22:47:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.682 22:47:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.682 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 ************************************ 00:05:14.682 START TEST rpc_daemon_integrity 00:05:14.682 ************************************ 00:05:14.682 22:47:42 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:14.682 22:47:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.682 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.682 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.682 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.682 22:47:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.682 22:47:42 -- rpc/rpc.sh@13 -- # jq length 00:05:14.944 22:47:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.944 22:47:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.944 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.944 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.944 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.944 22:47:42 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:14.944 22:47:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.944 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.944 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.944 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.944 22:47:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.944 { 00:05:14.944 "name": "Malloc2", 00:05:14.944 "aliases": [ 00:05:14.944 "a893108c-c928-491a-964c-c3b430eaf8f7" 00:05:14.944 ], 00:05:14.944 "product_name": "Malloc disk", 00:05:14.944 "block_size": 512, 00:05:14.944 "num_blocks": 16384, 00:05:14.944 "uuid": "a893108c-c928-491a-964c-c3b430eaf8f7", 00:05:14.944 "assigned_rate_limits": { 00:05:14.944 "rw_ios_per_sec": 0, 00:05:14.944 "rw_mbytes_per_sec": 0, 00:05:14.944 "r_mbytes_per_sec": 0, 00:05:14.944 "w_mbytes_per_sec": 0 00:05:14.944 }, 00:05:14.944 "claimed": false, 00:05:14.944 "zoned": false, 00:05:14.944 "supported_io_types": { 00:05:14.944 "read": true, 00:05:14.944 "write": true, 00:05:14.944 "unmap": true, 00:05:14.944 "write_zeroes": true, 00:05:14.944 "flush": true, 00:05:14.944 "reset": true, 00:05:14.944 "compare": false, 00:05:14.944 "compare_and_write": false, 00:05:14.944 "abort": true, 00:05:14.944 "nvme_admin": false, 00:05:14.944 "nvme_io": false 00:05:14.944 }, 00:05:14.944 "memory_domains": [ 00:05:14.944 { 00:05:14.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.944 "dma_device_type": 2 00:05:14.944 } 00:05:14.944 ], 00:05:14.944 "driver_specific": {} 00:05:14.944 } 00:05:14.944 ]' 00:05:14.944 22:47:42 -- rpc/rpc.sh@17 -- # jq length 00:05:14.944 22:47:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.944 22:47:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:14.944 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.944 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.944 [2024-06-09 22:47:42.981769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:14.944 [2024-06-09 
22:47:42.981801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.944 [2024-06-09 22:47:42.981815] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x219ff00 00:05:14.944 [2024-06-09 22:47:42.981822] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.944 [2024-06-09 22:47:42.983027] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.944 [2024-06-09 22:47:42.983047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.944 Passthru0 00:05:14.944 22:47:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.944 22:47:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.944 22:47:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.944 22:47:42 -- common/autotest_common.sh@10 -- # set +x 00:05:14.944 22:47:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.944 22:47:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.944 { 00:05:14.944 "name": "Malloc2", 00:05:14.944 "aliases": [ 00:05:14.944 "a893108c-c928-491a-964c-c3b430eaf8f7" 00:05:14.944 ], 00:05:14.944 "product_name": "Malloc disk", 00:05:14.944 "block_size": 512, 00:05:14.944 "num_blocks": 16384, 00:05:14.944 "uuid": "a893108c-c928-491a-964c-c3b430eaf8f7", 00:05:14.944 "assigned_rate_limits": { 00:05:14.944 "rw_ios_per_sec": 0, 00:05:14.944 "rw_mbytes_per_sec": 0, 00:05:14.944 "r_mbytes_per_sec": 0, 00:05:14.945 "w_mbytes_per_sec": 0 00:05:14.945 }, 00:05:14.945 "claimed": true, 00:05:14.945 "claim_type": "exclusive_write", 00:05:14.945 "zoned": false, 00:05:14.945 "supported_io_types": { 00:05:14.945 "read": true, 00:05:14.945 "write": true, 00:05:14.945 "unmap": true, 00:05:14.945 "write_zeroes": true, 00:05:14.945 "flush": true, 00:05:14.945 "reset": true, 00:05:14.945 "compare": false, 00:05:14.945 "compare_and_write": false, 00:05:14.945 "abort": true, 00:05:14.945 "nvme_admin": false, 00:05:14.945 "nvme_io": false 00:05:14.945 }, 00:05:14.945 "memory_domains": [ 00:05:14.945 { 00:05:14.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.945 "dma_device_type": 2 00:05:14.945 } 00:05:14.945 ], 00:05:14.945 "driver_specific": {} 00:05:14.945 }, 00:05:14.945 { 00:05:14.945 "name": "Passthru0", 00:05:14.945 "aliases": [ 00:05:14.945 "45730b96-017b-5aac-9f13-d564e659de09" 00:05:14.945 ], 00:05:14.945 "product_name": "passthru", 00:05:14.945 "block_size": 512, 00:05:14.945 "num_blocks": 16384, 00:05:14.945 "uuid": "45730b96-017b-5aac-9f13-d564e659de09", 00:05:14.945 "assigned_rate_limits": { 00:05:14.945 "rw_ios_per_sec": 0, 00:05:14.945 "rw_mbytes_per_sec": 0, 00:05:14.945 "r_mbytes_per_sec": 0, 00:05:14.945 "w_mbytes_per_sec": 0 00:05:14.945 }, 00:05:14.945 "claimed": false, 00:05:14.945 "zoned": false, 00:05:14.945 "supported_io_types": { 00:05:14.945 "read": true, 00:05:14.945 "write": true, 00:05:14.945 "unmap": true, 00:05:14.945 "write_zeroes": true, 00:05:14.945 "flush": true, 00:05:14.945 "reset": true, 00:05:14.945 "compare": false, 00:05:14.945 "compare_and_write": false, 00:05:14.945 "abort": true, 00:05:14.945 "nvme_admin": false, 00:05:14.945 "nvme_io": false 00:05:14.945 }, 00:05:14.945 "memory_domains": [ 00:05:14.945 { 00:05:14.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.945 "dma_device_type": 2 00:05:14.945 } 00:05:14.945 ], 00:05:14.945 "driver_specific": { 00:05:14.945 "passthru": { 00:05:14.945 "name": "Passthru0", 00:05:14.945 "base_bdev_name": "Malloc2" 00:05:14.945 } 00:05:14.945 } 00:05:14.945 } 
00:05:14.945 ]' 00:05:14.945 22:47:43 -- rpc/rpc.sh@21 -- # jq length 00:05:14.945 22:47:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.945 22:47:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.945 22:47:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.945 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:14.945 22:47:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.945 22:47:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:14.945 22:47:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.945 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:14.945 22:47:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.945 22:47:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.945 22:47:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:14.945 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:14.945 22:47:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:14.945 22:47:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.945 22:47:43 -- rpc/rpc.sh@26 -- # jq length 00:05:14.945 22:47:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.945 00:05:14.945 real 0m0.259s 00:05:14.945 user 0m0.167s 00:05:14.945 sys 0m0.031s 00:05:14.945 22:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.945 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:14.945 ************************************ 00:05:14.945 END TEST rpc_daemon_integrity 00:05:14.945 ************************************ 00:05:15.206 22:47:43 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.206 22:47:43 -- rpc/rpc.sh@84 -- # killprocess 3878429 00:05:15.206 22:47:43 -- common/autotest_common.sh@926 -- # '[' -z 3878429 ']' 00:05:15.206 22:47:43 -- common/autotest_common.sh@930 -- # kill -0 3878429 00:05:15.206 22:47:43 -- common/autotest_common.sh@931 -- # uname 00:05:15.206 22:47:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:15.206 22:47:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3878429 00:05:15.206 22:47:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:15.206 22:47:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:15.206 22:47:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3878429' 00:05:15.206 killing process with pid 3878429 00:05:15.206 22:47:43 -- common/autotest_common.sh@945 -- # kill 3878429 00:05:15.206 22:47:43 -- common/autotest_common.sh@950 -- # wait 3878429 00:05:15.468 00:05:15.468 real 0m2.193s 00:05:15.468 user 0m2.842s 00:05:15.468 sys 0m0.565s 00:05:15.468 22:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.468 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.468 ************************************ 00:05:15.468 END TEST rpc 00:05:15.468 ************************************ 00:05:15.468 22:47:43 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.468 22:47:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.468 22:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.468 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.468 ************************************ 00:05:15.468 START TEST rpc_client 00:05:15.468 ************************************ 00:05:15.468 22:47:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
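For reference, the rpc_integrity and rpc_daemon_integrity runs above exercise one create/inspect/delete cycle against the target over JSON-RPC. A minimal manual sketch of that cycle, assuming a running spdk_tgt on the default RPC socket and that the commands are issued from the spdk checkout (add -s <socket> to each rpc.py call if a non-default socket is in use; the bdev names and sizes are the ones shown in the log):

  MALLOC=$(scripts/rpc.py bdev_malloc_create 8 512)   # prints the new bdev name, e.g. Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length           # 1: only the malloc bdev is registered
  scripts/rpc.py bdev_passthru_create -b "$MALLOC" -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length           # 2: the malloc bdev plus the passthru stacked on it
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete "$MALLOC"
  scripts/rpc.py bdev_get_bdevs | jq length           # back to 0, as the test asserts

The jq length checks correspond to the '[' 0 == 0 ']', '[' 1 == 1 ']' and '[' 2 == 2 ']' comparisons visible in the trace above.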
00:05:15.468 * Looking for test storage... 00:05:15.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.468 22:47:43 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.468 OK 00:05:15.468 22:47:43 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.468 00:05:15.468 real 0m0.119s 00:05:15.468 user 0m0.051s 00:05:15.468 sys 0m0.076s 00:05:15.468 22:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.468 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.468 ************************************ 00:05:15.468 END TEST rpc_client 00:05:15.468 ************************************ 00:05:15.468 22:47:43 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.468 22:47:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.468 22:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.468 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.468 ************************************ 00:05:15.468 START TEST json_config 00:05:15.468 ************************************ 00:05:15.468 22:47:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.730 22:47:43 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.730 22:47:43 -- nvmf/common.sh@7 -- # uname -s 00:05:15.730 22:47:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.730 22:47:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.730 22:47:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.730 22:47:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.730 22:47:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.730 22:47:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.730 22:47:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.730 22:47:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.730 22:47:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.730 22:47:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.730 22:47:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.730 22:47:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:15.730 22:47:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.730 22:47:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.730 22:47:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.730 22:47:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.730 22:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.730 22:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.730 22:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.730 22:47:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.730 22:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.730 22:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.730 22:47:43 -- paths/export.sh@5 -- # export PATH 00:05:15.730 22:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.730 22:47:43 -- nvmf/common.sh@46 -- # : 0 00:05:15.730 22:47:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:15.730 22:47:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:15.730 22:47:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:15.730 22:47:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.731 22:47:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.731 22:47:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:15.731 22:47:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:15.731 22:47:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:15.731 22:47:43 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.731 22:47:43 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.731 22:47:43 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:15.731 22:47:43 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.731 22:47:43 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:15.731 22:47:43 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.731 22:47:43 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:15.731 22:47:43 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.731 22:47:43 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:15.731 22:47:43 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:15.731 22:47:43 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.731 22:47:43 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:15.731 INFO: JSON configuration test init 00:05:15.731 22:47:43 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:15.731 22:47:43 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:15.731 22:47:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.731 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.731 22:47:43 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:15.731 22:47:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.731 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.731 22:47:43 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.731 22:47:43 -- json_config/json_config.sh@98 -- # local app=target 00:05:15.731 22:47:43 -- json_config/json_config.sh@99 -- # shift 00:05:15.731 22:47:43 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:15.731 22:47:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:15.731 22:47:43 -- json_config/json_config.sh@111 -- # app_pid[$app]=3879002 00:05:15.731 22:47:43 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:15.731 Waiting for target to run... 00:05:15.731 22:47:43 -- json_config/json_config.sh@114 -- # waitforlisten 3879002 /var/tmp/spdk_tgt.sock 00:05:15.731 22:47:43 -- common/autotest_common.sh@819 -- # '[' -z 3879002 ']' 00:05:15.731 22:47:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.731 22:47:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:15.731 22:47:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.731 22:47:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:15.731 22:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:15.731 22:47:43 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.731 [2024-06-09 22:47:43.735785] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:15.731 [2024-06-09 22:47:43.735843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879002 ] 00:05:15.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.992 [2024-06-09 22:47:43.978522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.992 [2024-06-09 22:47:44.028919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.992 [2024-06-09 22:47:44.029037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.565 22:47:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.565 22:47:44 -- common/autotest_common.sh@852 -- # return 0 00:05:16.565 22:47:44 -- json_config/json_config.sh@115 -- # echo '' 00:05:16.565 00:05:16.565 22:47:44 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:16.565 22:47:44 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:16.565 22:47:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.565 22:47:44 -- common/autotest_common.sh@10 -- # set +x 00:05:16.565 22:47:44 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:16.565 22:47:44 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:16.565 22:47:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:16.565 22:47:44 -- common/autotest_common.sh@10 -- # set +x 00:05:16.565 22:47:44 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.565 22:47:44 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:16.565 22:47:44 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.137 22:47:45 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:17.137 22:47:45 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:17.137 22:47:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:17.137 22:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:17.137 22:47:45 -- json_config/json_config.sh@48 -- # local ret=0 00:05:17.137 22:47:45 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.137 22:47:45 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:17.137 22:47:45 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:17.137 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.137 22:47:45 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.137 22:47:45 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:17.137 22:47:45 -- json_config/json_config.sh@51 -- # local get_types 00:05:17.137 22:47:45 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:17.137 22:47:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:17.137 22:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:17.137 22:47:45 -- json_config/json_config.sh@58 -- # return 0 00:05:17.137 22:47:45 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:17.137 22:47:45 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:17.137 22:47:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:17.137 22:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:17.137 22:47:45 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.137 22:47:45 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:17.137 22:47:45 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.137 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.398 MallocForNvmf0 00:05:17.398 22:47:45 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.398 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.398 MallocForNvmf1 00:05:17.398 22:47:45 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.398 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.658 [2024-06-09 22:47:45.671961] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.658 22:47:45 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.659 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.920 22:47:45 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.920 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.920 22:47:45 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.920 22:47:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.181 22:47:46 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.181 22:47:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.181 [2024-06-09 22:47:46.241931] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
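The create_nvmf_subsystem_config steps above can be replayed by hand against the same target. A minimal sketch of the equivalent rpc.py calls, assuming the RPC socket used by this test (/var/tmp/spdk_tgt.sock); the arguments are copied from the log, only the shell wrapper is illustrative:

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  rpc nvmf_create_transport -t tcp -u 8192 -c 0        # triggers the "*** TCP Transport Init ***" notice
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420   # "Listening on 127.0.0.1 port 4420"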
00:05:18.181 22:47:46 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:18.181 22:47:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.181 22:47:46 -- common/autotest_common.sh@10 -- # set +x 00:05:18.181 22:47:46 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:18.181 22:47:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.181 22:47:46 -- common/autotest_common.sh@10 -- # set +x 00:05:18.181 22:47:46 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:18.181 22:47:46 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.181 22:47:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.441 MallocBdevForConfigChangeCheck 00:05:18.441 22:47:46 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:18.441 22:47:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.441 22:47:46 -- common/autotest_common.sh@10 -- # set +x 00:05:18.441 22:47:46 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:18.441 22:47:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.701 22:47:46 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:18.701 INFO: shutting down applications... 00:05:18.701 22:47:46 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:18.701 22:47:46 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:18.701 22:47:46 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:18.701 22:47:46 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.272 Calling clear_iscsi_subsystem 00:05:19.272 Calling clear_nvmf_subsystem 00:05:19.272 Calling clear_nbd_subsystem 00:05:19.272 Calling clear_ublk_subsystem 00:05:19.272 Calling clear_vhost_blk_subsystem 00:05:19.272 Calling clear_vhost_scsi_subsystem 00:05:19.272 Calling clear_scheduler_subsystem 00:05:19.272 Calling clear_bdev_subsystem 00:05:19.272 Calling clear_accel_subsystem 00:05:19.272 Calling clear_vmd_subsystem 00:05:19.272 Calling clear_sock_subsystem 00:05:19.272 Calling clear_iobuf_subsystem 00:05:19.272 22:47:47 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:19.272 22:47:47 -- json_config/json_config.sh@396 -- # count=100 00:05:19.272 22:47:47 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:19.272 22:47:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.272 22:47:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:19.272 22:47:47 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:19.272 22:47:47 -- json_config/json_config.sh@398 -- # break 00:05:19.272 22:47:47 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:19.272 22:47:47 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:19.272 22:47:47 -- json_config/json_config.sh@120 -- # local app=target 00:05:19.272 22:47:47 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:19.272 22:47:47 -- json_config/json_config.sh@124 -- # [[ -n 3879002 ]] 00:05:19.272 22:47:47 -- json_config/json_config.sh@127 -- # kill -SIGINT 3879002 00:05:19.272 22:47:47 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:19.272 22:47:47 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:19.272 22:47:47 -- json_config/json_config.sh@130 -- # kill -0 3879002 00:05:19.533 22:47:47 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:19.794 22:47:47 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:19.794 22:47:47 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:19.794 22:47:47 -- json_config/json_config.sh@130 -- # kill -0 3879002 00:05:19.794 22:47:47 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:19.794 22:47:47 -- json_config/json_config.sh@132 -- # break 00:05:19.794 22:47:47 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:19.794 22:47:47 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:19.794 SPDK target shutdown done 00:05:19.794 22:47:47 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:19.794 INFO: relaunching applications... 00:05:19.794 22:47:47 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.794 22:47:47 -- json_config/json_config.sh@98 -- # local app=target 00:05:19.794 22:47:47 -- json_config/json_config.sh@99 -- # shift 00:05:19.794 22:47:47 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:19.794 22:47:47 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:19.794 22:47:47 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:19.794 22:47:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.794 22:47:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:19.794 22:47:47 -- json_config/json_config.sh@111 -- # app_pid[$app]=3880082 00:05:19.794 22:47:47 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:19.794 Waiting for target to run... 00:05:19.794 22:47:47 -- json_config/json_config.sh@114 -- # waitforlisten 3880082 /var/tmp/spdk_tgt.sock 00:05:19.794 22:47:47 -- common/autotest_common.sh@819 -- # '[' -z 3880082 ']' 00:05:19.794 22:47:47 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.794 22:47:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.794 22:47:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.794 22:47:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.794 22:47:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.794 22:47:47 -- common/autotest_common.sh@10 -- # set +x 00:05:20.055 [2024-06-09 22:47:48.011611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:20.055 [2024-06-09 22:47:48.011673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880082 ] 00:05:20.055 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.316 [2024-06-09 22:47:48.302684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.316 [2024-06-09 22:47:48.352645] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.316 [2024-06-09 22:47:48.352767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.887 [2024-06-09 22:47:48.841883] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.887 [2024-06-09 22:47:48.874255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.459 22:47:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.459 22:47:49 -- common/autotest_common.sh@852 -- # return 0 00:05:21.459 22:47:49 -- json_config/json_config.sh@115 -- # echo '' 00:05:21.459 00:05:21.459 22:47:49 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:21.459 22:47:49 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:21.459 INFO: Checking if target configuration is the same... 00:05:21.459 22:47:49 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.459 22:47:49 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:21.459 22:47:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.459 + '[' 2 -ne 2 ']' 00:05:21.459 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.459 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.459 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.459 +++ basename /dev/fd/62 00:05:21.459 ++ mktemp /tmp/62.XXX 00:05:21.459 + tmp_file_1=/tmp/62.hni 00:05:21.459 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.459 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.459 + tmp_file_2=/tmp/spdk_tgt_config.json.JnH 00:05:21.459 + ret=0 00:05:21.459 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.720 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.720 + diff -u /tmp/62.hni /tmp/spdk_tgt_config.json.JnH 00:05:21.720 + echo 'INFO: JSON config files are the same' 00:05:21.720 INFO: JSON config files are the same 00:05:21.720 + rm /tmp/62.hni /tmp/spdk_tgt_config.json.JnH 00:05:21.720 + exit 0 00:05:21.720 22:47:49 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:21.720 22:47:49 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:21.720 INFO: changing configuration and checking if this can be detected... 
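The "JSON config files are the same" result above comes from a sort-and-diff of two save_config dumps: one taken live from the relaunched target and one from the spdk_tgt_config.json it was started with. A condensed sketch of that comparison, assuming config_filter.py filters stdin to stdout as json_diff.sh uses it; the /tmp output names here are illustrative (the harness uses mktemp):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_SOCK=/var/tmp/spdk_tgt.sock
  # Normalize both configurations (-method sort orders subsystems and their
  # config entries) so a plain textual diff is meaningful.
  $SPDK/scripts/rpc.py -s $RPC_SOCK save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
  $SPDK/test/json_config/config_filter.py -method sort \
      < $SPDK/spdk_tgt_config.json > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'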
00:05:21.720 22:47:49 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.720 22:47:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:21.720 22:47:49 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.720 22:47:49 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:21.721 22:47:49 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.721 + '[' 2 -ne 2 ']' 00:05:21.721 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:21.721 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:21.721 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.721 +++ basename /dev/fd/62 00:05:21.982 ++ mktemp /tmp/62.XXX 00:05:21.982 + tmp_file_1=/tmp/62.Ho6 00:05:21.982 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.982 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:21.982 + tmp_file_2=/tmp/spdk_tgt_config.json.lIh 00:05:21.982 + ret=0 00:05:21.982 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.982 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.243 + diff -u /tmp/62.Ho6 /tmp/spdk_tgt_config.json.lIh 00:05:22.243 + ret=1 00:05:22.243 + echo '=== Start of file: /tmp/62.Ho6 ===' 00:05:22.243 + cat /tmp/62.Ho6 00:05:22.243 + echo '=== End of file: /tmp/62.Ho6 ===' 00:05:22.243 + echo '' 00:05:22.243 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lIh ===' 00:05:22.243 + cat /tmp/spdk_tgt_config.json.lIh 00:05:22.243 + echo '=== End of file: /tmp/spdk_tgt_config.json.lIh ===' 00:05:22.243 + echo '' 00:05:22.243 + rm /tmp/62.Ho6 /tmp/spdk_tgt_config.json.lIh 00:05:22.243 + exit 1 00:05:22.243 22:47:50 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:22.243 INFO: configuration change detected. 
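Change detection works the same way, except the live configuration is perturbed first: the MallocBdevForConfigChangeCheck bdev created during json_config_test_init is deleted over RPC, so the re-sorted dump no longer matches the file and the diff exits non-zero. A brief sketch under the same assumptions and temp-file names as the previous one:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC_SOCK=/var/tmp/spdk_tgt.sock
  $SPDK/scripts/rpc.py -s $RPC_SOCK bdev_malloc_delete MallocBdevForConfigChangeCheck
  $SPDK/scripts/rpc.py -s $RPC_SOCK save_config \
      | $SPDK/test/json_config/config_filter.py -method sort > /tmp/live.json
  if ! diff -u /tmp/live.json /tmp/file.json > /dev/null; then
      echo 'INFO: configuration change detected.'
  fi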
00:05:22.243 22:47:50 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:22.243 22:47:50 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:22.243 22:47:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:22.243 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.243 22:47:50 -- json_config/json_config.sh@360 -- # local ret=0 00:05:22.243 22:47:50 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:22.243 22:47:50 -- json_config/json_config.sh@370 -- # [[ -n 3880082 ]] 00:05:22.243 22:47:50 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:22.243 22:47:50 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:22.243 22:47:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:22.243 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.243 22:47:50 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:22.243 22:47:50 -- json_config/json_config.sh@246 -- # uname -s 00:05:22.243 22:47:50 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:22.243 22:47:50 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:22.243 22:47:50 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:22.243 22:47:50 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:22.243 22:47:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:22.243 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.243 22:47:50 -- json_config/json_config.sh@376 -- # killprocess 3880082 00:05:22.243 22:47:50 -- common/autotest_common.sh@926 -- # '[' -z 3880082 ']' 00:05:22.243 22:47:50 -- common/autotest_common.sh@930 -- # kill -0 3880082 00:05:22.243 22:47:50 -- common/autotest_common.sh@931 -- # uname 00:05:22.243 22:47:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:22.243 22:47:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3880082 00:05:22.243 22:47:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:22.243 22:47:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:22.243 22:47:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3880082' 00:05:22.243 killing process with pid 3880082 00:05:22.243 22:47:50 -- common/autotest_common.sh@945 -- # kill 3880082 00:05:22.243 22:47:50 -- common/autotest_common.sh@950 -- # wait 3880082 00:05:22.505 22:47:50 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.505 22:47:50 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:22.505 22:47:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:22.505 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.505 22:47:50 -- json_config/json_config.sh@381 -- # return 0 00:05:22.505 22:47:50 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:22.505 INFO: Success 00:05:22.505 00:05:22.505 real 0m7.028s 00:05:22.505 user 0m8.383s 00:05:22.505 sys 0m1.649s 00:05:22.505 22:47:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.505 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.505 ************************************ 00:05:22.505 END TEST json_config 00:05:22.505 ************************************ 00:05:22.505 22:47:50 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.505 22:47:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.505 22:47:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.505 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.505 ************************************ 00:05:22.505 START TEST json_config_extra_key 00:05:22.505 ************************************ 00:05:22.505 22:47:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:22.767 22:47:50 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:22.767 22:47:50 -- nvmf/common.sh@7 -- # uname -s 00:05:22.767 22:47:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.767 22:47:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.767 22:47:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.767 22:47:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.767 22:47:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.767 22:47:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.767 22:47:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.767 22:47:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.767 22:47:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.767 22:47:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.767 22:47:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.767 22:47:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:22.767 22:47:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.767 22:47:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.767 22:47:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:22.767 22:47:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:22.767 22:47:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.767 22:47:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.767 22:47:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.767 22:47:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.767 22:47:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.767 22:47:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.767 22:47:50 -- paths/export.sh@5 -- # export PATH 00:05:22.767 22:47:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.767 22:47:50 -- nvmf/common.sh@46 -- # : 0 00:05:22.768 22:47:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:22.768 22:47:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:22.768 22:47:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:22.768 22:47:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.768 22:47:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.768 22:47:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:22.768 22:47:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:22.768 22:47:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:22.768 INFO: launching applications... 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3880622 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:22.768 Waiting for target to run... 
00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3880622 /var/tmp/spdk_tgt.sock 00:05:22.768 22:47:50 -- common/autotest_common.sh@819 -- # '[' -z 3880622 ']' 00:05:22.768 22:47:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.768 22:47:50 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:22.768 22:47:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.768 22:47:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.768 22:47:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.768 22:47:50 -- common/autotest_common.sh@10 -- # set +x 00:05:22.768 [2024-06-09 22:47:50.826570] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:22.768 [2024-06-09 22:47:50.826647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880622 ] 00:05:22.768 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.029 [2024-06-09 22:47:51.071650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.029 [2024-06-09 22:47:51.120985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.029 [2024-06-09 22:47:51.121106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.601 22:47:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.601 22:47:51 -- common/autotest_common.sh@852 -- # return 0 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:23.601 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:23.601 INFO: shutting down applications... 
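The trace above launches spdk_tgt with -r /var/tmp/spdk_tgt.sock and --json extra_key.json, then blocks in waitforlisten until the RPC socket is up (the trace shows its max_retries=100 default). Below is only a rough stand-in for that wait step, checking just that the UNIX socket exists; the function name and the 30 x 0.5 s retry bounds are illustrative assumptions, not the autotest_common.sh implementation.

# Sketch only: simplified wait-for-RPC-socket loop (not the real waitforlisten helper).
# Assumes the target was started with -r /var/tmp/spdk_tgt.sock as in the trace above.
wait_for_rpc_socket() {
    local sock=${1:-/var/tmp/spdk_tgt.sock}
    local i
    for ((i = 0; i < 30; i++)); do        # retry bound chosen for illustration
        if [ -S "$sock" ]; then           # socket present: target is listening
            return 0
        fi
        sleep 0.5
    done
    return 1                              # give up; caller would kill the target
}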
00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3880622 ]] 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3880622 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3880622 00:05:23.601 22:47:51 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3880622 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:24.173 SPDK target shutdown done 00:05:24.173 22:47:52 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:24.173 Success 00:05:24.173 00:05:24.173 real 0m1.413s 00:05:24.173 user 0m1.087s 00:05:24.173 sys 0m0.325s 00:05:24.173 22:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.173 22:47:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.173 ************************************ 00:05:24.173 END TEST json_config_extra_key 00:05:24.173 ************************************ 00:05:24.173 22:47:52 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.174 22:47:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.174 22:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.174 22:47:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.174 ************************************ 00:05:24.174 START TEST alias_rpc 00:05:24.174 ************************************ 00:05:24.174 22:47:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.174 * Looking for test storage... 00:05:24.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.174 22:47:52 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.174 22:47:52 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3880991 00:05:24.174 22:47:52 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3880991 00:05:24.174 22:47:52 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.174 22:47:52 -- common/autotest_common.sh@819 -- # '[' -z 3880991 ']' 00:05:24.174 22:47:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.174 22:47:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:24.174 22:47:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:24.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.174 22:47:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:24.174 22:47:52 -- common/autotest_common.sh@10 -- # set +x 00:05:24.174 [2024-06-09 22:47:52.275191] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:24.174 [2024-06-09 22:47:52.275253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880991 ] 00:05:24.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.174 [2024-06-09 22:47:52.335932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.433 [2024-06-09 22:47:52.401570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:24.433 [2024-06-09 22:47:52.401695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.005 22:47:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:25.005 22:47:53 -- common/autotest_common.sh@852 -- # return 0 00:05:25.005 22:47:53 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.266 22:47:53 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3880991 00:05:25.266 22:47:53 -- common/autotest_common.sh@926 -- # '[' -z 3880991 ']' 00:05:25.266 22:47:53 -- common/autotest_common.sh@930 -- # kill -0 3880991 00:05:25.266 22:47:53 -- common/autotest_common.sh@931 -- # uname 00:05:25.266 22:47:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.266 22:47:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3880991 00:05:25.266 22:47:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.266 22:47:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.266 22:47:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3880991' 00:05:25.266 killing process with pid 3880991 00:05:25.266 22:47:53 -- common/autotest_common.sh@945 -- # kill 3880991 00:05:25.266 22:47:53 -- common/autotest_common.sh@950 -- # wait 3880991 00:05:25.527 00:05:25.527 real 0m1.345s 00:05:25.527 user 0m1.476s 00:05:25.527 sys 0m0.357s 00:05:25.527 22:47:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.527 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.527 ************************************ 00:05:25.527 END TEST alias_rpc 00:05:25.527 ************************************ 00:05:25.527 22:47:53 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:25.527 22:47:53 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.527 22:47:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.527 22:47:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.527 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.527 ************************************ 00:05:25.527 START TEST spdkcli_tcp 00:05:25.527 ************************************ 00:05:25.527 22:47:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.527 * Looking for test storage... 
00:05:25.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.527 22:47:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.527 22:47:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.527 22:47:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.527 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3881380 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@27 -- # waitforlisten 3881380 00:05:25.527 22:47:53 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.527 22:47:53 -- common/autotest_common.sh@819 -- # '[' -z 3881380 ']' 00:05:25.527 22:47:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.527 22:47:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.527 22:47:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.527 22:47:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.527 22:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:25.527 [2024-06-09 22:47:53.672348] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:25.527 [2024-06-09 22:47:53.672428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881380 ] 00:05:25.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.788 [2024-06-09 22:47:53.737119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.788 [2024-06-09 22:47:53.806882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.788 [2024-06-09 22:47:53.807110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.788 [2024-06-09 22:47:53.807115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.360 22:47:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.360 22:47:54 -- common/autotest_common.sh@852 -- # return 0 00:05:26.360 22:47:54 -- spdkcli/tcp.sh@31 -- # socat_pid=3881544 00:05:26.360 22:47:54 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.360 22:47:54 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.621 [ 00:05:26.621 "bdev_malloc_delete", 00:05:26.621 "bdev_malloc_create", 00:05:26.621 "bdev_null_resize", 00:05:26.621 "bdev_null_delete", 00:05:26.621 "bdev_null_create", 00:05:26.621 "bdev_nvme_cuse_unregister", 00:05:26.621 "bdev_nvme_cuse_register", 00:05:26.621 "bdev_opal_new_user", 00:05:26.621 "bdev_opal_set_lock_state", 00:05:26.621 "bdev_opal_delete", 00:05:26.621 "bdev_opal_get_info", 00:05:26.621 "bdev_opal_create", 00:05:26.621 "bdev_nvme_opal_revert", 00:05:26.621 "bdev_nvme_opal_init", 00:05:26.621 "bdev_nvme_send_cmd", 00:05:26.621 "bdev_nvme_get_path_iostat", 00:05:26.621 "bdev_nvme_get_mdns_discovery_info", 00:05:26.621 "bdev_nvme_stop_mdns_discovery", 00:05:26.621 "bdev_nvme_start_mdns_discovery", 00:05:26.621 "bdev_nvme_set_multipath_policy", 00:05:26.621 "bdev_nvme_set_preferred_path", 00:05:26.621 "bdev_nvme_get_io_paths", 00:05:26.621 "bdev_nvme_remove_error_injection", 00:05:26.621 "bdev_nvme_add_error_injection", 00:05:26.621 "bdev_nvme_get_discovery_info", 00:05:26.621 "bdev_nvme_stop_discovery", 00:05:26.621 "bdev_nvme_start_discovery", 00:05:26.621 "bdev_nvme_get_controller_health_info", 00:05:26.621 "bdev_nvme_disable_controller", 00:05:26.621 "bdev_nvme_enable_controller", 00:05:26.621 "bdev_nvme_reset_controller", 00:05:26.621 "bdev_nvme_get_transport_statistics", 00:05:26.621 "bdev_nvme_apply_firmware", 00:05:26.621 "bdev_nvme_detach_controller", 00:05:26.621 "bdev_nvme_get_controllers", 00:05:26.621 "bdev_nvme_attach_controller", 00:05:26.621 "bdev_nvme_set_hotplug", 00:05:26.621 "bdev_nvme_set_options", 00:05:26.621 "bdev_passthru_delete", 00:05:26.621 "bdev_passthru_create", 00:05:26.621 "bdev_lvol_grow_lvstore", 00:05:26.621 "bdev_lvol_get_lvols", 00:05:26.621 "bdev_lvol_get_lvstores", 00:05:26.621 "bdev_lvol_delete", 00:05:26.621 "bdev_lvol_set_read_only", 00:05:26.621 "bdev_lvol_resize", 00:05:26.621 "bdev_lvol_decouple_parent", 00:05:26.621 "bdev_lvol_inflate", 00:05:26.621 "bdev_lvol_rename", 00:05:26.621 "bdev_lvol_clone_bdev", 00:05:26.621 "bdev_lvol_clone", 00:05:26.621 "bdev_lvol_snapshot", 00:05:26.621 "bdev_lvol_create", 00:05:26.621 "bdev_lvol_delete_lvstore", 00:05:26.621 "bdev_lvol_rename_lvstore", 00:05:26.621 "bdev_lvol_create_lvstore", 00:05:26.621 "bdev_raid_set_options", 00:05:26.621 
"bdev_raid_remove_base_bdev", 00:05:26.621 "bdev_raid_add_base_bdev", 00:05:26.621 "bdev_raid_delete", 00:05:26.621 "bdev_raid_create", 00:05:26.621 "bdev_raid_get_bdevs", 00:05:26.621 "bdev_error_inject_error", 00:05:26.621 "bdev_error_delete", 00:05:26.621 "bdev_error_create", 00:05:26.621 "bdev_split_delete", 00:05:26.621 "bdev_split_create", 00:05:26.621 "bdev_delay_delete", 00:05:26.621 "bdev_delay_create", 00:05:26.622 "bdev_delay_update_latency", 00:05:26.622 "bdev_zone_block_delete", 00:05:26.622 "bdev_zone_block_create", 00:05:26.622 "blobfs_create", 00:05:26.622 "blobfs_detect", 00:05:26.622 "blobfs_set_cache_size", 00:05:26.622 "bdev_aio_delete", 00:05:26.622 "bdev_aio_rescan", 00:05:26.622 "bdev_aio_create", 00:05:26.622 "bdev_ftl_set_property", 00:05:26.622 "bdev_ftl_get_properties", 00:05:26.622 "bdev_ftl_get_stats", 00:05:26.622 "bdev_ftl_unmap", 00:05:26.622 "bdev_ftl_unload", 00:05:26.622 "bdev_ftl_delete", 00:05:26.622 "bdev_ftl_load", 00:05:26.622 "bdev_ftl_create", 00:05:26.622 "bdev_virtio_attach_controller", 00:05:26.622 "bdev_virtio_scsi_get_devices", 00:05:26.622 "bdev_virtio_detach_controller", 00:05:26.622 "bdev_virtio_blk_set_hotplug", 00:05:26.622 "bdev_iscsi_delete", 00:05:26.622 "bdev_iscsi_create", 00:05:26.622 "bdev_iscsi_set_options", 00:05:26.622 "accel_error_inject_error", 00:05:26.622 "ioat_scan_accel_module", 00:05:26.622 "dsa_scan_accel_module", 00:05:26.622 "iaa_scan_accel_module", 00:05:26.622 "iscsi_set_options", 00:05:26.622 "iscsi_get_auth_groups", 00:05:26.622 "iscsi_auth_group_remove_secret", 00:05:26.622 "iscsi_auth_group_add_secret", 00:05:26.622 "iscsi_delete_auth_group", 00:05:26.622 "iscsi_create_auth_group", 00:05:26.622 "iscsi_set_discovery_auth", 00:05:26.622 "iscsi_get_options", 00:05:26.622 "iscsi_target_node_request_logout", 00:05:26.622 "iscsi_target_node_set_redirect", 00:05:26.622 "iscsi_target_node_set_auth", 00:05:26.622 "iscsi_target_node_add_lun", 00:05:26.622 "iscsi_get_connections", 00:05:26.622 "iscsi_portal_group_set_auth", 00:05:26.622 "iscsi_start_portal_group", 00:05:26.622 "iscsi_delete_portal_group", 00:05:26.622 "iscsi_create_portal_group", 00:05:26.622 "iscsi_get_portal_groups", 00:05:26.622 "iscsi_delete_target_node", 00:05:26.622 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.622 "iscsi_target_node_add_pg_ig_maps", 00:05:26.622 "iscsi_create_target_node", 00:05:26.622 "iscsi_get_target_nodes", 00:05:26.622 "iscsi_delete_initiator_group", 00:05:26.622 "iscsi_initiator_group_remove_initiators", 00:05:26.622 "iscsi_initiator_group_add_initiators", 00:05:26.622 "iscsi_create_initiator_group", 00:05:26.622 "iscsi_get_initiator_groups", 00:05:26.622 "nvmf_set_crdt", 00:05:26.622 "nvmf_set_config", 00:05:26.622 "nvmf_set_max_subsystems", 00:05:26.622 "nvmf_subsystem_get_listeners", 00:05:26.622 "nvmf_subsystem_get_qpairs", 00:05:26.622 "nvmf_subsystem_get_controllers", 00:05:26.622 "nvmf_get_stats", 00:05:26.622 "nvmf_get_transports", 00:05:26.622 "nvmf_create_transport", 00:05:26.622 "nvmf_get_targets", 00:05:26.622 "nvmf_delete_target", 00:05:26.622 "nvmf_create_target", 00:05:26.622 "nvmf_subsystem_allow_any_host", 00:05:26.622 "nvmf_subsystem_remove_host", 00:05:26.622 "nvmf_subsystem_add_host", 00:05:26.622 "nvmf_subsystem_remove_ns", 00:05:26.622 "nvmf_subsystem_add_ns", 00:05:26.622 "nvmf_subsystem_listener_set_ana_state", 00:05:26.622 "nvmf_discovery_get_referrals", 00:05:26.622 "nvmf_discovery_remove_referral", 00:05:26.622 "nvmf_discovery_add_referral", 00:05:26.622 "nvmf_subsystem_remove_listener", 
00:05:26.622 "nvmf_subsystem_add_listener", 00:05:26.622 "nvmf_delete_subsystem", 00:05:26.622 "nvmf_create_subsystem", 00:05:26.622 "nvmf_get_subsystems", 00:05:26.622 "env_dpdk_get_mem_stats", 00:05:26.622 "nbd_get_disks", 00:05:26.622 "nbd_stop_disk", 00:05:26.622 "nbd_start_disk", 00:05:26.622 "ublk_recover_disk", 00:05:26.622 "ublk_get_disks", 00:05:26.622 "ublk_stop_disk", 00:05:26.622 "ublk_start_disk", 00:05:26.622 "ublk_destroy_target", 00:05:26.622 "ublk_create_target", 00:05:26.622 "virtio_blk_create_transport", 00:05:26.622 "virtio_blk_get_transports", 00:05:26.622 "vhost_controller_set_coalescing", 00:05:26.622 "vhost_get_controllers", 00:05:26.622 "vhost_delete_controller", 00:05:26.622 "vhost_create_blk_controller", 00:05:26.622 "vhost_scsi_controller_remove_target", 00:05:26.622 "vhost_scsi_controller_add_target", 00:05:26.622 "vhost_start_scsi_controller", 00:05:26.622 "vhost_create_scsi_controller", 00:05:26.622 "thread_set_cpumask", 00:05:26.622 "framework_get_scheduler", 00:05:26.622 "framework_set_scheduler", 00:05:26.622 "framework_get_reactors", 00:05:26.622 "thread_get_io_channels", 00:05:26.622 "thread_get_pollers", 00:05:26.622 "thread_get_stats", 00:05:26.622 "framework_monitor_context_switch", 00:05:26.622 "spdk_kill_instance", 00:05:26.622 "log_enable_timestamps", 00:05:26.622 "log_get_flags", 00:05:26.622 "log_clear_flag", 00:05:26.622 "log_set_flag", 00:05:26.622 "log_get_level", 00:05:26.622 "log_set_level", 00:05:26.622 "log_get_print_level", 00:05:26.622 "log_set_print_level", 00:05:26.622 "framework_enable_cpumask_locks", 00:05:26.622 "framework_disable_cpumask_locks", 00:05:26.622 "framework_wait_init", 00:05:26.622 "framework_start_init", 00:05:26.622 "scsi_get_devices", 00:05:26.622 "bdev_get_histogram", 00:05:26.622 "bdev_enable_histogram", 00:05:26.622 "bdev_set_qos_limit", 00:05:26.622 "bdev_set_qd_sampling_period", 00:05:26.622 "bdev_get_bdevs", 00:05:26.622 "bdev_reset_iostat", 00:05:26.622 "bdev_get_iostat", 00:05:26.622 "bdev_examine", 00:05:26.622 "bdev_wait_for_examine", 00:05:26.622 "bdev_set_options", 00:05:26.622 "notify_get_notifications", 00:05:26.622 "notify_get_types", 00:05:26.622 "accel_get_stats", 00:05:26.622 "accel_set_options", 00:05:26.622 "accel_set_driver", 00:05:26.622 "accel_crypto_key_destroy", 00:05:26.622 "accel_crypto_keys_get", 00:05:26.622 "accel_crypto_key_create", 00:05:26.622 "accel_assign_opc", 00:05:26.622 "accel_get_module_info", 00:05:26.622 "accel_get_opc_assignments", 00:05:26.622 "vmd_rescan", 00:05:26.622 "vmd_remove_device", 00:05:26.622 "vmd_enable", 00:05:26.622 "sock_set_default_impl", 00:05:26.622 "sock_impl_set_options", 00:05:26.622 "sock_impl_get_options", 00:05:26.622 "iobuf_get_stats", 00:05:26.622 "iobuf_set_options", 00:05:26.622 "framework_get_pci_devices", 00:05:26.622 "framework_get_config", 00:05:26.622 "framework_get_subsystems", 00:05:26.622 "trace_get_info", 00:05:26.622 "trace_get_tpoint_group_mask", 00:05:26.622 "trace_disable_tpoint_group", 00:05:26.622 "trace_enable_tpoint_group", 00:05:26.622 "trace_clear_tpoint_mask", 00:05:26.622 "trace_set_tpoint_mask", 00:05:26.622 "spdk_get_version", 00:05:26.622 "rpc_get_methods" 00:05:26.622 ] 00:05:26.622 22:47:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.622 22:47:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:26.622 22:47:54 -- common/autotest_common.sh@10 -- # set +x 00:05:26.622 22:47:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.622 22:47:54 -- spdkcli/tcp.sh@38 -- # killprocess 
3881380 00:05:26.622 22:47:54 -- common/autotest_common.sh@926 -- # '[' -z 3881380 ']' 00:05:26.622 22:47:54 -- common/autotest_common.sh@930 -- # kill -0 3881380 00:05:26.622 22:47:54 -- common/autotest_common.sh@931 -- # uname 00:05:26.622 22:47:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:26.622 22:47:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3881380 00:05:26.622 22:47:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:26.622 22:47:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:26.622 22:47:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3881380' 00:05:26.622 killing process with pid 3881380 00:05:26.622 22:47:54 -- common/autotest_common.sh@945 -- # kill 3881380 00:05:26.622 22:47:54 -- common/autotest_common.sh@950 -- # wait 3881380 00:05:26.884 00:05:26.884 real 0m1.356s 00:05:26.884 user 0m2.485s 00:05:26.884 sys 0m0.395s 00:05:26.884 22:47:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.884 22:47:54 -- common/autotest_common.sh@10 -- # set +x 00:05:26.884 ************************************ 00:05:26.884 END TEST spdkcli_tcp 00:05:26.884 ************************************ 00:05:26.884 22:47:54 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.884 22:47:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.884 22:47:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.884 22:47:54 -- common/autotest_common.sh@10 -- # set +x 00:05:26.884 ************************************ 00:05:26.884 START TEST dpdk_mem_utility 00:05:26.884 ************************************ 00:05:26.884 22:47:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.884 * Looking for test storage... 00:05:26.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.884 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.884 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3881779 00:05:26.884 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3881779 00:05:26.884 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.884 22:47:55 -- common/autotest_common.sh@819 -- # '[' -z 3881779 ']' 00:05:26.884 22:47:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.884 22:47:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.884 22:47:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.884 22:47:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.884 22:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:26.884 [2024-06-09 22:47:55.058274] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:26.884 [2024-06-09 22:47:55.058337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881779 ] 00:05:27.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.145 [2024-06-09 22:47:55.118379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.145 [2024-06-09 22:47:55.184046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.145 [2024-06-09 22:47:55.184171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.718 22:47:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.718 22:47:55 -- common/autotest_common.sh@852 -- # return 0 00:05:27.718 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.718 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.718 22:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.718 22:47:55 -- common/autotest_common.sh@10 -- # set +x 00:05:27.718 { 00:05:27.718 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.718 } 00:05:27.718 22:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.718 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.718 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.718 1 heaps totaling size 814.000000 MiB 00:05:27.718 size: 814.000000 MiB heap id: 0 00:05:27.718 end heaps---------- 00:05:27.718 8 mempools totaling size 598.116089 MiB 00:05:27.718 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.718 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.718 size: 84.521057 MiB name: bdev_io_3881779 00:05:27.718 size: 51.011292 MiB name: evtpool_3881779 00:05:27.718 size: 50.003479 MiB name: msgpool_3881779 00:05:27.718 size: 21.763794 MiB name: PDU_Pool 00:05:27.718 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.718 size: 0.026123 MiB name: Session_Pool 00:05:27.718 end mempools------- 00:05:27.718 6 memzones totaling size 4.142822 MiB 00:05:27.718 size: 1.000366 MiB name: RG_ring_0_3881779 00:05:27.718 size: 1.000366 MiB name: RG_ring_1_3881779 00:05:27.718 size: 1.000366 MiB name: RG_ring_4_3881779 00:05:27.718 size: 1.000366 MiB name: RG_ring_5_3881779 00:05:27.718 size: 0.125366 MiB name: RG_ring_2_3881779 00:05:27.718 size: 0.015991 MiB name: RG_ring_3_3881779 00:05:27.718 end memzones------- 00:05:27.718 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.982 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.982 list of free elements. 
size: 12.519348 MiB 00:05:27.982 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.982 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.982 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.982 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.982 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.982 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.982 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.982 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.982 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.982 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.982 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.982 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.982 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.982 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.982 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.982 list of standard malloc elements. size: 199.218079 MiB 00:05:27.982 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.982 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.982 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.982 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.982 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.982 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.982 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.982 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.982 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.982 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:27.982 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.982 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.982 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.982 list of memzone associated elements. size: 602.262573 MiB 00:05:27.982 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.982 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.982 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.982 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.982 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.982 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3881779_0 00:05:27.982 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.982 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3881779_0 00:05:27.982 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.982 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3881779_0 00:05:27.982 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.982 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.982 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.982 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.982 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.982 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3881779 00:05:27.982 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.982 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3881779 00:05:27.982 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.982 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3881779 00:05:27.982 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.982 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.982 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.982 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.982 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.982 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.982 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.982 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3881779 00:05:27.982 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.982 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3881779 00:05:27.982 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.982 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3881779 00:05:27.982 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.982 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3881779 00:05:27.982 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.982 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3881779 00:05:27.982 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.982 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.982 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.982 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.982 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.982 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.982 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.982 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3881779 00:05:27.982 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.982 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.982 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.982 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.982 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.982 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3881779 00:05:27.982 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.982 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.982 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.982 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3881779 00:05:27.982 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.982 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3881779 00:05:27.982 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.982 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.982 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.982 22:47:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3881779 00:05:27.982 22:47:55 -- common/autotest_common.sh@926 -- # '[' -z 3881779 ']' 00:05:27.982 22:47:55 -- common/autotest_common.sh@930 -- # kill -0 3881779 00:05:27.982 22:47:55 -- common/autotest_common.sh@931 -- # uname 00:05:27.982 22:47:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.982 22:47:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3881779 00:05:27.982 22:47:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.982 22:47:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.982 22:47:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3881779' 00:05:27.982 killing process with pid 3881779 00:05:27.982 22:47:55 -- common/autotest_common.sh@945 -- # kill 3881779 00:05:27.982 22:47:55 -- common/autotest_common.sh@950 -- # wait 3881779 00:05:28.245 00:05:28.245 real 0m1.252s 00:05:28.245 user 0m1.297s 00:05:28.245 sys 0m0.372s 00:05:28.245 22:47:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.245 22:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:28.245 ************************************ 00:05:28.245 END TEST dpdk_mem_utility 00:05:28.245 ************************************ 00:05:28.245 22:47:56 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.245 22:47:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.245 22:47:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.245 22:47:56 -- common/autotest_common.sh@10 -- # set +x 
00:05:28.245 ************************************ 00:05:28.245 START TEST event 00:05:28.245 ************************************ 00:05:28.245 22:47:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.245 * Looking for test storage... 00:05:28.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.245 22:47:56 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.245 22:47:56 -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.245 22:47:56 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.245 22:47:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:28.245 22:47:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.245 22:47:56 -- common/autotest_common.sh@10 -- # set +x 00:05:28.245 ************************************ 00:05:28.245 START TEST event_perf 00:05:28.245 ************************************ 00:05:28.245 22:47:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.245 Running I/O for 1 seconds...[2024-06-09 22:47:56.328734] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:28.245 [2024-06-09 22:47:56.328849] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882095 ] 00:05:28.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.245 [2024-06-09 22:47:56.395000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.506 [2024-06-09 22:47:56.468687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.506 [2024-06-09 22:47:56.468807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.506 [2024-06-09 22:47:56.468965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.506 [2024-06-09 22:47:56.468965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.449 Running I/O for 1 seconds... 00:05:29.449 lcore 0: 165359 00:05:29.449 lcore 1: 165359 00:05:29.449 lcore 2: 165355 00:05:29.449 lcore 3: 165358 00:05:29.449 done. 
00:05:29.449 00:05:29.449 real 0m1.214s 00:05:29.449 user 0m4.137s 00:05:29.449 sys 0m0.076s 00:05:29.449 22:47:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.449 22:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:29.449 ************************************ 00:05:29.449 END TEST event_perf 00:05:29.449 ************************************ 00:05:29.449 22:47:57 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.449 22:47:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:29.449 22:47:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.449 22:47:57 -- common/autotest_common.sh@10 -- # set +x 00:05:29.449 ************************************ 00:05:29.449 START TEST event_reactor 00:05:29.449 ************************************ 00:05:29.449 22:47:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.449 [2024-06-09 22:47:57.586598] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:29.449 [2024-06-09 22:47:57.586707] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882233 ] 00:05:29.449 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.710 [2024-06-09 22:47:57.649224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.710 [2024-06-09 22:47:57.713278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.653 test_start 00:05:30.653 oneshot 00:05:30.653 tick 100 00:05:30.653 tick 100 00:05:30.653 tick 250 00:05:30.653 tick 100 00:05:30.653 tick 100 00:05:30.653 tick 100 00:05:30.653 tick 250 00:05:30.653 tick 500 00:05:30.653 tick 100 00:05:30.653 tick 100 00:05:30.653 tick 250 00:05:30.653 tick 100 00:05:30.653 tick 100 00:05:30.653 test_end 00:05:30.653 00:05:30.653 real 0m1.199s 00:05:30.653 user 0m1.124s 00:05:30.653 sys 0m0.070s 00:05:30.653 22:47:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.653 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:05:30.653 ************************************ 00:05:30.653 END TEST event_reactor 00:05:30.653 ************************************ 00:05:30.653 22:47:58 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.653 22:47:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:30.653 22:47:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.653 22:47:58 -- common/autotest_common.sh@10 -- # set +x 00:05:30.653 ************************************ 00:05:30.653 START TEST event_reactor_perf 00:05:30.653 ************************************ 00:05:30.653 22:47:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.653 [2024-06-09 22:47:58.829656] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:30.653 [2024-06-09 22:47:58.829766] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882560 ] 00:05:30.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.914 [2024-06-09 22:47:58.891811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.914 [2024-06-09 22:47:58.956335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.855 test_start 00:05:31.855 test_end 00:05:31.855 Performance: 367635 events per second 00:05:31.855 00:05:31.855 real 0m1.199s 00:05:31.855 user 0m1.132s 00:05:31.855 sys 0m0.063s 00:05:31.855 22:48:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.855 22:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:31.855 ************************************ 00:05:31.855 END TEST event_reactor_perf 00:05:31.855 ************************************ 00:05:32.115 22:48:00 -- event/event.sh@49 -- # uname -s 00:05:32.115 22:48:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.115 22:48:00 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.115 22:48:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.115 22:48:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.115 22:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:32.115 ************************************ 00:05:32.115 START TEST event_scheduler 00:05:32.115 ************************************ 00:05:32.115 22:48:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.115 * Looking for test storage... 00:05:32.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.115 22:48:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.115 22:48:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3882954 00:05:32.115 22:48:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.115 22:48:00 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.115 22:48:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 3882954 00:05:32.115 22:48:00 -- common/autotest_common.sh@819 -- # '[' -z 3882954 ']' 00:05:32.115 22:48:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.115 22:48:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.115 22:48:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.115 22:48:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.115 22:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:32.115 [2024-06-09 22:48:00.186854] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:32.115 [2024-06-09 22:48:00.186915] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882954 ] 00:05:32.115 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.115 [2024-06-09 22:48:00.239805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.375 [2024-06-09 22:48:00.300150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.375 [2024-06-09 22:48:00.300274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.375 [2024-06-09 22:48:00.300444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.375 [2024-06-09 22:48:00.300445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.947 22:48:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.947 22:48:00 -- common/autotest_common.sh@852 -- # return 0 00:05:32.947 22:48:00 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.947 22:48:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 POWER: Env isn't set yet! 00:05:32.947 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:32.947 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.947 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.947 POWER: Attempting to initialise PSTAT power management... 00:05:32.947 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:32.947 POWER: Initialized successfully for lcore 0 power management 00:05:32.947 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:32.947 POWER: Initialized successfully for lcore 1 power management 00:05:32.947 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:32.947 POWER: Initialized successfully for lcore 2 power management 00:05:32.947 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:32.947 POWER: Initialized successfully for lcore 3 power management 00:05:32.947 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.947 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 [2024-06-09 22:48:01.073096] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
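The POWER lines above (and the matching "set back to the original" lines at the end of this test) come from DPDK power management switching the cpufreq governor of each lcore to 'performance' through sysfs; the path comes straight from the failed-write message in the log. The snippet below is only an illustrative manual equivalent for inspecting or changing those governors, not the DPDK librte_power code the scheduler app actually uses.

# Sketch only: inspect (and optionally set) the cpufreq governor per CPU,
# using the sysfs path quoted in the POWER messages above.
for cpu in 0 1 2 3; do
    gov_file=/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor
    echo "cpu${cpu}: $(cat "$gov_file")"
    # echo performance | sudo tee "$gov_file"   # 'powersave' is what the test restores at shutdown
done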
00:05:32.947 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.947 22:48:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.947 22:48:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 ************************************ 00:05:32.947 START TEST scheduler_create_thread 00:05:32.947 ************************************ 00:05:32.947 22:48:01 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.947 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 2 00:05:32.947 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.947 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 3 00:05:32.947 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.947 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:32.947 4 00:05:32.947 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:32.947 22:48:01 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.947 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:32.947 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 5 00:05:33.208 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.208 22:48:01 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.208 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.208 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 6 00:05:33.208 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.208 22:48:01 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.208 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.208 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 7 00:05:33.208 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.208 22:48:01 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.208 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.208 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 8 00:05:33.208 22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.208 22:48:01 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.208 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.208 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:33.208 9 00:05:33.208 
22:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:33.208 22:48:01 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.208 22:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:33.208 22:48:01 -- common/autotest_common.sh@10 -- # set +x 00:05:34.595 10 00:05:34.595 22:48:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:34.595 22:48:02 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.595 22:48:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:34.595 22:48:02 -- common/autotest_common.sh@10 -- # set +x 00:05:35.537 22:48:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.537 22:48:03 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:35.537 22:48:03 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:35.537 22:48:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.537 22:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:36.480 22:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:36.480 22:48:04 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.480 22:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:36.480 22:48:04 -- common/autotest_common.sh@10 -- # set +x 00:05:37.052 22:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.052 22:48:05 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.052 22:48:05 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.052 22:48:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:37.052 22:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:37.995 22:48:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:37.995 00:05:37.995 real 0m4.798s 00:05:37.995 user 0m0.023s 00:05:37.995 sys 0m0.008s 00:05:37.995 22:48:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.995 22:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:37.995 ************************************ 00:05:37.995 END TEST scheduler_create_thread 00:05:37.995 ************************************ 00:05:37.995 22:48:05 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:37.995 22:48:05 -- scheduler/scheduler.sh@46 -- # killprocess 3882954 00:05:37.995 22:48:05 -- common/autotest_common.sh@926 -- # '[' -z 3882954 ']' 00:05:37.995 22:48:05 -- common/autotest_common.sh@930 -- # kill -0 3882954 00:05:37.995 22:48:05 -- common/autotest_common.sh@931 -- # uname 00:05:37.995 22:48:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.995 22:48:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3882954 00:05:37.995 22:48:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:37.995 22:48:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:37.995 22:48:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3882954' 00:05:37.995 killing process with pid 3882954 00:05:37.995 22:48:05 -- common/autotest_common.sh@945 -- # kill 3882954 00:05:37.995 22:48:05 -- common/autotest_common.sh@950 -- # wait 3882954 00:05:37.995 [2024-06-09 22:48:06.159029] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
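
The scheduler_create_thread trace above drives SPDK's test scheduler plugin over rpc.py: pinned 100%-active and idle threads on cpumasks 0x1 through 0x8, an unpinned thread at 30% load, a thread whose activity is later raised to 50% with scheduler_thread_set_active, and one thread that is created only to be deleted again. A condensed sketch of that sequence; the socket path and the plugin import path are assumptions, and these RPCs come from the test plugin under test/event/scheduler, not from the core RPC set:

# Sketch (not captured output): thread setup exercised by scheduler_create_thread.
# Assumes rpc.py is run from the SPDK repo root and the scheduler_plugin test
# module is importable by rpc.py.
rpc="./scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin"

# One 100%-busy and one idle thread pinned to each of the first four cores.
for mask in 0x1 0x2 0x4 0x8; do
  $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
  $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

# Unpinned threads: one ~30% busy, one created idle and then raised to 50%.
$rpc scheduler_thread_create -n one_third_active -a 30
tid=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$tid" 50

# Create a thread only to delete it again, covering the removal path.
tid=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$tid"
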
00:05:38.256 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:38.256 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:38.256 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:38.256 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:38.256 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:38.256 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:38.256 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:38.256 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:38.256 00:05:38.256 real 0m6.262s 00:05:38.256 user 0m14.109s 00:05:38.256 sys 0m0.317s 00:05:38.256 22:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.256 22:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:38.256 ************************************ 00:05:38.256 END TEST event_scheduler 00:05:38.256 ************************************ 00:05:38.256 22:48:06 -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.256 22:48:06 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.256 22:48:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.256 22:48:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.256 22:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:38.256 ************************************ 00:05:38.256 START TEST app_repeat 00:05:38.256 ************************************ 00:05:38.256 22:48:06 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:38.256 22:48:06 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.256 22:48:06 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.256 22:48:06 -- event/event.sh@13 -- # local nbd_list 00:05:38.256 22:48:06 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.256 22:48:06 -- event/event.sh@14 -- # local bdev_list 00:05:38.256 22:48:06 -- event/event.sh@15 -- # local repeat_times=4 00:05:38.256 22:48:06 -- event/event.sh@17 -- # modprobe nbd 00:05:38.256 22:48:06 -- event/event.sh@19 -- # repeat_pid=3884341 00:05:38.256 22:48:06 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.256 22:48:06 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.256 22:48:06 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3884341' 00:05:38.256 Process app_repeat pid: 3884341 00:05:38.256 22:48:06 -- event/event.sh@23 -- # for i in {0..2} 00:05:38.256 22:48:06 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.256 spdk_app_start Round 0 00:05:38.256 22:48:06 -- event/event.sh@25 -- # waitforlisten 3884341 /var/tmp/spdk-nbd.sock 00:05:38.256 22:48:06 -- common/autotest_common.sh@819 -- # '[' -z 3884341 ']' 00:05:38.256 22:48:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.256 22:48:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.256 22:48:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
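
Before the rounds begin, app_repeat_test launches the app_repeat example with its own RPC socket (-r /var/tmp/spdk-nbd.sock -m 0x3 -t 4), saves the pid for the cleanup trap, and blocks in waitforlisten until that socket answers. A minimal stand-in for that launch-and-wait step, using the same paths as the trace; the real waitforlisten helper in autotest_common.sh also handles configurable timeouts and a few corner cases:

# Sketch: start the app under test and poll its RPC socket until it is up.
sock=/var/tmp/spdk-nbd.sock
./test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
repeat_pid=$!

for _ in $(seq 1 100); do
  # rpc_get_methods only answers once the app is listening on $sock.
  if ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
    break
  fi
  kill -0 "$repeat_pid" 2>/dev/null || { echo "app_repeat exited early" >&2; exit 1; }
  sleep 0.1
done
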
00:05:38.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.256 22:48:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.256 22:48:06 -- common/autotest_common.sh@10 -- # set +x 00:05:38.256 [2024-06-09 22:48:06.406036] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:38.256 [2024-06-09 22:48:06.406125] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884341 ] 00:05:38.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.518 [2024-06-09 22:48:06.468654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.518 [2024-06-09 22:48:06.537180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.518 [2024-06-09 22:48:06.537186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.088 22:48:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.088 22:48:07 -- common/autotest_common.sh@852 -- # return 0 00:05:39.088 22:48:07 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.349 Malloc0 00:05:39.349 22:48:07 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.349 Malloc1 00:05:39.349 22:48:07 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.349 22:48:07 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@12 -- # local i 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.610 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.611 /dev/nbd0 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.611 22:48:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:39.611 22:48:07 -- common/autotest_common.sh@857 -- # local i 00:05:39.611 22:48:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:39.611 22:48:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:39.611 22:48:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:39.611 22:48:07 -- 
common/autotest_common.sh@861 -- # break 00:05:39.611 22:48:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:39.611 22:48:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:39.611 22:48:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.611 1+0 records in 00:05:39.611 1+0 records out 00:05:39.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255611 s, 16.0 MB/s 00:05:39.611 22:48:07 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.611 22:48:07 -- common/autotest_common.sh@874 -- # size=4096 00:05:39.611 22:48:07 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.611 22:48:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:39.611 22:48:07 -- common/autotest_common.sh@877 -- # return 0 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.611 22:48:07 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.871 /dev/nbd1 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.871 22:48:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:39.871 22:48:07 -- common/autotest_common.sh@857 -- # local i 00:05:39.871 22:48:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:39.871 22:48:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:39.871 22:48:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:39.871 22:48:07 -- common/autotest_common.sh@861 -- # break 00:05:39.871 22:48:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:39.871 22:48:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:39.871 22:48:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.871 1+0 records in 00:05:39.871 1+0 records out 00:05:39.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200864 s, 20.4 MB/s 00:05:39.871 22:48:07 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.871 22:48:07 -- common/autotest_common.sh@874 -- # size=4096 00:05:39.871 22:48:07 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:39.871 22:48:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:39.871 22:48:07 -- common/autotest_common.sh@877 -- # return 0 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.871 22:48:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.131 { 00:05:40.131 "nbd_device": "/dev/nbd0", 00:05:40.131 "bdev_name": "Malloc0" 00:05:40.131 }, 00:05:40.131 { 00:05:40.131 "nbd_device": "/dev/nbd1", 
00:05:40.131 "bdev_name": "Malloc1" 00:05:40.131 } 00:05:40.131 ]' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.131 { 00:05:40.131 "nbd_device": "/dev/nbd0", 00:05:40.131 "bdev_name": "Malloc0" 00:05:40.131 }, 00:05:40.131 { 00:05:40.131 "nbd_device": "/dev/nbd1", 00:05:40.131 "bdev_name": "Malloc1" 00:05:40.131 } 00:05:40.131 ]' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.131 /dev/nbd1' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.131 /dev/nbd1' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.131 256+0 records in 00:05:40.131 256+0 records out 00:05:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116913 s, 89.7 MB/s 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.131 256+0 records in 00:05:40.131 256+0 records out 00:05:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170479 s, 61.5 MB/s 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.131 256+0 records in 00:05:40.131 256+0 records out 00:05:40.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169133 s, 62.0 MB/s 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.131 22:48:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@41 -- # break 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@41 -- # break 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.392 22:48:08 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@65 -- # true 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.653 22:48:08 -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.653 22:48:08 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.914 22:48:08 -- event/event.sh@35 -- # 
sleep 3 00:05:40.914 [2024-06-09 22:48:09.028842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.914 [2024-06-09 22:48:09.090395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.914 [2024-06-09 22:48:09.090407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.200 [2024-06-09 22:48:09.122054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.200 [2024-06-09 22:48:09.122088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.810 22:48:11 -- event/event.sh@23 -- # for i in {0..2} 00:05:43.810 22:48:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.810 spdk_app_start Round 1 00:05:43.810 22:48:11 -- event/event.sh@25 -- # waitforlisten 3884341 /var/tmp/spdk-nbd.sock 00:05:43.810 22:48:11 -- common/autotest_common.sh@819 -- # '[' -z 3884341 ']' 00:05:43.811 22:48:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.811 22:48:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.811 22:48:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.811 22:48:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.811 22:48:11 -- common/autotest_common.sh@10 -- # set +x 00:05:44.072 22:48:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.072 22:48:12 -- common/autotest_common.sh@852 -- # return 0 00:05:44.072 22:48:12 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.072 Malloc0 00:05:44.072 22:48:12 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.333 Malloc1 00:05:44.333 22:48:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@12 -- # local i 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.333 22:48:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.333 /dev/nbd0 00:05:44.595 22:48:12 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.595 22:48:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:44.595 22:48:12 -- common/autotest_common.sh@857 -- # local i 00:05:44.595 22:48:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:44.595 22:48:12 -- common/autotest_common.sh@861 -- # break 00:05:44.595 22:48:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.595 1+0 records in 00:05:44.595 1+0 records out 00:05:44.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304223 s, 13.5 MB/s 00:05:44.595 22:48:12 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.595 22:48:12 -- common/autotest_common.sh@874 -- # size=4096 00:05:44.595 22:48:12 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.595 22:48:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:44.595 22:48:12 -- common/autotest_common.sh@877 -- # return 0 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.595 /dev/nbd1 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.595 22:48:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:44.595 22:48:12 -- common/autotest_common.sh@857 -- # local i 00:05:44.595 22:48:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:44.595 22:48:12 -- common/autotest_common.sh@861 -- # break 00:05:44.595 22:48:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:44.595 22:48:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.595 1+0 records in 00:05:44.595 1+0 records out 00:05:44.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312794 s, 13.1 MB/s 00:05:44.595 22:48:12 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.595 22:48:12 -- common/autotest_common.sh@874 -- # size=4096 00:05:44.595 22:48:12 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:44.595 22:48:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:44.595 22:48:12 -- common/autotest_common.sh@877 -- # return 0 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.595 22:48:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.857 { 00:05:44.857 "nbd_device": "/dev/nbd0", 00:05:44.857 "bdev_name": "Malloc0" 00:05:44.857 }, 00:05:44.857 { 00:05:44.857 "nbd_device": "/dev/nbd1", 00:05:44.857 "bdev_name": "Malloc1" 00:05:44.857 } 00:05:44.857 ]' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.857 { 00:05:44.857 "nbd_device": "/dev/nbd0", 00:05:44.857 "bdev_name": "Malloc0" 00:05:44.857 }, 00:05:44.857 { 00:05:44.857 "nbd_device": "/dev/nbd1", 00:05:44.857 "bdev_name": "Malloc1" 00:05:44.857 } 00:05:44.857 ]' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.857 /dev/nbd1' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.857 /dev/nbd1' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.857 256+0 records in 00:05:44.857 256+0 records out 00:05:44.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115831 s, 90.5 MB/s 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.857 256+0 records in 00:05:44.857 256+0 records out 00:05:44.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0157972 s, 66.4 MB/s 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.857 256+0 records in 00:05:44.857 256+0 records out 00:05:44.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187107 s, 56.0 MB/s 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@51 -- # local i 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.857 22:48:12 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@41 -- # break 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.119 22:48:13 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@41 -- # break 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@65 -- # true 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.381 22:48:13 -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.381 22:48:13 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.642 22:48:13 -- event/event.sh@35 -- # sleep 3 00:05:45.903 [2024-06-09 22:48:13.837042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.904 [2024-06-09 22:48:13.898632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.904 [2024-06-09 22:48:13.898722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.904 [2024-06-09 22:48:13.930288] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.904 [2024-06-09 22:48:13.930323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.210 22:48:16 -- event/event.sh@23 -- # for i in {0..2} 00:05:49.210 22:48:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.210 spdk_app_start Round 2 00:05:49.210 22:48:16 -- event/event.sh@25 -- # waitforlisten 3884341 /var/tmp/spdk-nbd.sock 00:05:49.210 22:48:16 -- common/autotest_common.sh@819 -- # '[' -z 3884341 ']' 00:05:49.210 22:48:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.210 22:48:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.210 22:48:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:49.210 22:48:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.210 22:48:16 -- common/autotest_common.sh@10 -- # set +x 00:05:49.210 22:48:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.210 22:48:16 -- common/autotest_common.sh@852 -- # return 0 00:05:49.210 22:48:16 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.210 Malloc0 00:05:49.210 22:48:17 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.210 Malloc1 00:05:49.210 22:48:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@12 -- # local i 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.210 /dev/nbd0 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.210 22:48:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:49.210 22:48:17 -- common/autotest_common.sh@857 -- # local i 00:05:49.210 22:48:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:49.210 22:48:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:49.210 22:48:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:49.210 22:48:17 -- common/autotest_common.sh@861 -- # break 00:05:49.210 22:48:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:49.210 22:48:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:49.210 22:48:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.210 1+0 records in 00:05:49.210 1+0 records out 00:05:49.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210216 s, 19.5 MB/s 00:05:49.210 22:48:17 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.210 22:48:17 -- common/autotest_common.sh@874 -- # size=4096 00:05:49.210 22:48:17 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.210 22:48:17 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:49.210 22:48:17 -- common/autotest_common.sh@877 -- # return 0 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.210 22:48:17 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.471 /dev/nbd1 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.471 22:48:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:49.471 22:48:17 -- common/autotest_common.sh@857 -- # local i 00:05:49.471 22:48:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:49.471 22:48:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:49.471 22:48:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:49.471 22:48:17 -- common/autotest_common.sh@861 -- # break 00:05:49.471 22:48:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:49.471 22:48:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:49.471 22:48:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.471 1+0 records in 00:05:49.471 1+0 records out 00:05:49.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275676 s, 14.9 MB/s 00:05:49.471 22:48:17 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.471 22:48:17 -- common/autotest_common.sh@874 -- # size=4096 00:05:49.471 22:48:17 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:49.471 22:48:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:49.471 22:48:17 -- common/autotest_common.sh@877 -- # return 0 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.471 22:48:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.733 { 00:05:49.733 "nbd_device": "/dev/nbd0", 00:05:49.733 "bdev_name": "Malloc0" 00:05:49.733 }, 00:05:49.733 { 00:05:49.733 "nbd_device": "/dev/nbd1", 00:05:49.733 "bdev_name": "Malloc1" 00:05:49.733 } 00:05:49.733 ]' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.733 { 00:05:49.733 "nbd_device": "/dev/nbd0", 00:05:49.733 "bdev_name": "Malloc0" 00:05:49.733 }, 00:05:49.733 { 00:05:49.733 "nbd_device": "/dev/nbd1", 00:05:49.733 "bdev_name": "Malloc1" 00:05:49.733 } 00:05:49.733 ]' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.733 /dev/nbd1' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.733 /dev/nbd1' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.733 22:48:17 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.733 256+0 records in 00:05:49.733 256+0 records out 00:05:49.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114103 s, 91.9 MB/s 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.733 256+0 records in 00:05:49.733 256+0 records out 00:05:49.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154979 s, 67.7 MB/s 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.733 256+0 records in 00:05:49.733 256+0 records out 00:05:49.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0180337 s, 58.1 MB/s 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.733 22:48:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.995 22:48:17 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@41 -- # break 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.995 22:48:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@41 -- # break 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.995 22:48:18 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@65 -- # true 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.256 22:48:18 -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.256 22:48:18 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.517 22:48:18 -- event/event.sh@35 -- # sleep 3 00:05:50.517 [2024-06-09 22:48:18.647367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.778 [2024-06-09 22:48:18.708857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.778 [2024-06-09 22:48:18.708864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.778 [2024-06-09 22:48:18.740405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.778 [2024-06-09 22:48:18.740451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
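
The waitfornbd and waitfornbd_exit calls interleaved above are bounded polls of /proc/partitions for the device name, used after nbd_start_disk and nbd_stop_disk respectively. A minimal combined version of the same idea; the real helpers also read one block from a freshly attached device (the dd ... iflag=direct step in the trace) to confirm it is actually usable:

# Sketch: wait until an nbd device appears in (or disappears from) /proc/partitions.
waitfornbd_sketch() {
  local nbd_name=$1 want=$2 i
  for ((i = 1; i <= 20; i++)); do
    if grep -q -w "$nbd_name" /proc/partitions; then
      [ "$want" = present ] && return 0
    else
      [ "$want" = absent ] && return 0
    fi
    sleep 0.1
  done
  return 1
}

waitfornbd_sketch nbd0 present   # after nbd_start_disk
waitfornbd_sketch nbd0 absent    # after nbd_stop_disk
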
00:05:54.080 22:48:21 -- event/event.sh@38 -- # waitforlisten 3884341 /var/tmp/spdk-nbd.sock 00:05:54.080 22:48:21 -- common/autotest_common.sh@819 -- # '[' -z 3884341 ']' 00:05:54.080 22:48:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.080 22:48:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.080 22:48:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.080 22:48:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.080 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.080 22:48:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.080 22:48:21 -- common/autotest_common.sh@852 -- # return 0 00:05:54.080 22:48:21 -- event/event.sh@39 -- # killprocess 3884341 00:05:54.080 22:48:21 -- common/autotest_common.sh@926 -- # '[' -z 3884341 ']' 00:05:54.080 22:48:21 -- common/autotest_common.sh@930 -- # kill -0 3884341 00:05:54.080 22:48:21 -- common/autotest_common.sh@931 -- # uname 00:05:54.080 22:48:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.080 22:48:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3884341 00:05:54.080 22:48:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.080 22:48:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.080 22:48:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3884341' 00:05:54.080 killing process with pid 3884341 00:05:54.080 22:48:21 -- common/autotest_common.sh@945 -- # kill 3884341 00:05:54.080 22:48:21 -- common/autotest_common.sh@950 -- # wait 3884341 00:05:54.080 spdk_app_start is called in Round 0. 00:05:54.080 Shutdown signal received, stop current app iteration 00:05:54.081 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:54.081 spdk_app_start is called in Round 1. 00:05:54.081 Shutdown signal received, stop current app iteration 00:05:54.081 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:54.081 spdk_app_start is called in Round 2. 00:05:54.081 Shutdown signal received, stop current app iteration 00:05:54.081 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:05:54.081 spdk_app_start is called in Round 3. 
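
After Round 3 the repeat app itself is torn down with killprocess, the same helper used for the scheduler app earlier: check that the pid is still alive, look up its process name with ps, print the "killing process with pid ..." marker seen in the log, send the signal and wait for the pid. A trimmed-down sketch of that pattern; error handling and the sudo special case of the real helper are omitted:

killprocess_sketch() {
  local pid=$1 name
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
  name=$(ps --no-headers -o comm= "$pid")       # real helper special-cases "sudo" here
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # wait only reaps children of this shell
}

killprocess_sketch "$repeat_pid"   # $repeat_pid saved when app_repeat was launched
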
00:05:54.081 Shutdown signal received, stop current app iteration 00:05:54.081 22:48:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.081 22:48:21 -- event/event.sh@42 -- # return 0 00:05:54.081 00:05:54.081 real 0m15.463s 00:05:54.081 user 0m33.206s 00:05:54.081 sys 0m2.126s 00:05:54.081 22:48:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.081 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 ************************************ 00:05:54.081 END TEST app_repeat 00:05:54.081 ************************************ 00:05:54.081 22:48:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.081 22:48:21 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.081 22:48:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.081 22:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.081 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 ************************************ 00:05:54.081 START TEST cpu_locks 00:05:54.081 ************************************ 00:05:54.081 22:48:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:54.081 * Looking for test storage... 00:05:54.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.081 22:48:21 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.081 22:48:21 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.081 22:48:21 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.081 22:48:21 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.081 22:48:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.081 22:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.081 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 ************************************ 00:05:54.081 START TEST default_locks 00:05:54.081 ************************************ 00:05:54.081 22:48:21 -- common/autotest_common.sh@1104 -- # default_locks 00:05:54.081 22:48:21 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3888200 00:05:54.081 22:48:21 -- event/cpu_locks.sh@47 -- # waitforlisten 3888200 00:05:54.081 22:48:21 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.081 22:48:21 -- common/autotest_common.sh@819 -- # '[' -z 3888200 ']' 00:05:54.081 22:48:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.081 22:48:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.081 22:48:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.081 22:48:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.081 22:48:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.081 [2024-06-09 22:48:22.037320] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:54.081 [2024-06-09 22:48:22.037413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888200 ] 00:05:54.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.081 [2024-06-09 22:48:22.100977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.081 [2024-06-09 22:48:22.176218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.081 [2024-06-09 22:48:22.176352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.652 22:48:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.652 22:48:22 -- common/autotest_common.sh@852 -- # return 0 00:05:54.652 22:48:22 -- event/cpu_locks.sh@49 -- # locks_exist 3888200 00:05:54.652 22:48:22 -- event/cpu_locks.sh@22 -- # lslocks -p 3888200 00:05:54.652 22:48:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.223 lslocks: write error 00:05:55.223 22:48:23 -- event/cpu_locks.sh@50 -- # killprocess 3888200 00:05:55.223 22:48:23 -- common/autotest_common.sh@926 -- # '[' -z 3888200 ']' 00:05:55.223 22:48:23 -- common/autotest_common.sh@930 -- # kill -0 3888200 00:05:55.223 22:48:23 -- common/autotest_common.sh@931 -- # uname 00:05:55.223 22:48:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.223 22:48:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3888200 00:05:55.223 22:48:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.223 22:48:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.223 22:48:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3888200' 00:05:55.223 killing process with pid 3888200 00:05:55.223 22:48:23 -- common/autotest_common.sh@945 -- # kill 3888200 00:05:55.223 22:48:23 -- common/autotest_common.sh@950 -- # wait 3888200 00:05:55.483 22:48:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3888200 00:05:55.483 22:48:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:55.483 22:48:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3888200 00:05:55.483 22:48:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:55.483 22:48:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.484 22:48:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:55.484 22:48:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:55.484 22:48:23 -- common/autotest_common.sh@643 -- # waitforlisten 3888200 00:05:55.484 22:48:23 -- common/autotest_common.sh@819 -- # '[' -z 3888200 ']' 00:05:55.484 22:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.484 22:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.484 22:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.484 22:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.484 22:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3888200) - No such process 00:05:55.484 ERROR: process (pid: 3888200) is no longer running 00:05:55.484 22:48:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.484 22:48:23 -- common/autotest_common.sh@852 -- # return 1 00:05:55.484 22:48:23 -- common/autotest_common.sh@643 -- # es=1 00:05:55.484 22:48:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:55.484 22:48:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:55.484 22:48:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:55.484 22:48:23 -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.484 22:48:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.484 22:48:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.484 22:48:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.484 00:05:55.484 real 0m1.488s 00:05:55.484 user 0m1.575s 00:05:55.484 sys 0m0.495s 00:05:55.484 22:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.484 22:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.484 ************************************ 00:05:55.484 END TEST default_locks 00:05:55.484 ************************************ 00:05:55.484 22:48:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.484 22:48:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.484 22:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.484 22:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.484 ************************************ 00:05:55.484 START TEST default_locks_via_rpc 00:05:55.484 ************************************ 00:05:55.484 22:48:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:55.484 22:48:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3888564 00:05:55.484 22:48:23 -- event/cpu_locks.sh@63 -- # waitforlisten 3888564 00:05:55.484 22:48:23 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.484 22:48:23 -- common/autotest_common.sh@819 -- # '[' -z 3888564 ']' 00:05:55.484 22:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.484 22:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.484 22:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.484 22:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.484 22:48:23 -- common/autotest_common.sh@10 -- # set +x 00:05:55.484 [2024-06-09 22:48:23.556936] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:55.484 [2024-06-09 22:48:23.556992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888564 ] 00:05:55.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.484 [2024-06-09 22:48:23.615059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.744 [2024-06-09 22:48:23.676983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.744 [2024-06-09 22:48:23.677107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.315 22:48:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.315 22:48:24 -- common/autotest_common.sh@852 -- # return 0 00:05:56.315 22:48:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:56.315 22:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.315 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:56.315 22:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.315 22:48:24 -- event/cpu_locks.sh@67 -- # no_locks 00:05:56.315 22:48:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.315 22:48:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.315 22:48:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.315 22:48:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:56.315 22:48:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.315 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:56.315 22:48:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.315 22:48:24 -- event/cpu_locks.sh@71 -- # locks_exist 3888564 00:05:56.315 22:48:24 -- event/cpu_locks.sh@22 -- # lslocks -p 3888564 00:05:56.315 22:48:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.576 22:48:24 -- event/cpu_locks.sh@73 -- # killprocess 3888564 00:05:56.576 22:48:24 -- common/autotest_common.sh@926 -- # '[' -z 3888564 ']' 00:05:56.576 22:48:24 -- common/autotest_common.sh@930 -- # kill -0 3888564 00:05:56.576 22:48:24 -- common/autotest_common.sh@931 -- # uname 00:05:56.576 22:48:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.576 22:48:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3888564 00:05:56.576 22:48:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.576 22:48:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.576 22:48:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3888564' 00:05:56.576 killing process with pid 3888564 00:05:56.576 22:48:24 -- common/autotest_common.sh@945 -- # kill 3888564 00:05:56.576 22:48:24 -- common/autotest_common.sh@950 -- # wait 3888564 00:05:56.836 00:05:56.836 real 0m1.425s 00:05:56.836 user 0m1.504s 00:05:56.836 sys 0m0.470s 00:05:56.836 22:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.836 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:56.836 ************************************ 00:05:56.836 END TEST default_locks_via_rpc 00:05:56.836 ************************************ 00:05:56.836 22:48:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.836 22:48:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.836 22:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.836 22:48:24 -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.836 ************************************ 00:05:56.836 START TEST non_locking_app_on_locked_coremask 00:05:56.836 ************************************ 00:05:56.836 22:48:24 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:56.836 22:48:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3888861 00:05:56.836 22:48:24 -- event/cpu_locks.sh@81 -- # waitforlisten 3888861 /var/tmp/spdk.sock 00:05:56.836 22:48:24 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.836 22:48:24 -- common/autotest_common.sh@819 -- # '[' -z 3888861 ']' 00:05:56.836 22:48:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.836 22:48:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.836 22:48:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.836 22:48:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.836 22:48:24 -- common/autotest_common.sh@10 -- # set +x 00:05:57.096 [2024-06-09 22:48:25.028586] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:57.096 [2024-06-09 22:48:25.028646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888861 ] 00:05:57.096 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.096 [2024-06-09 22:48:25.087027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.096 [2024-06-09 22:48:25.151528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.096 [2024-06-09 22:48:25.151658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.720 22:48:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.720 22:48:25 -- common/autotest_common.sh@852 -- # return 0 00:05:57.720 22:48:25 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.720 22:48:25 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3888948 00:05:57.720 22:48:25 -- event/cpu_locks.sh@85 -- # waitforlisten 3888948 /var/tmp/spdk2.sock 00:05:57.720 22:48:25 -- common/autotest_common.sh@819 -- # '[' -z 3888948 ']' 00:05:57.720 22:48:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.720 22:48:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.720 22:48:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.720 22:48:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.720 22:48:25 -- common/autotest_common.sh@10 -- # set +x 00:05:57.720 [2024-06-09 22:48:25.806622] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:05:57.721 [2024-06-09 22:48:25.806673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888948 ] 00:05:57.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.721 [2024-06-09 22:48:25.897577] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.721 [2024-06-09 22:48:25.897606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.992 [2024-06-09 22:48:26.024468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.992 [2024-06-09 22:48:26.024593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.618 22:48:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:58.618 22:48:26 -- common/autotest_common.sh@852 -- # return 0 00:05:58.618 22:48:26 -- event/cpu_locks.sh@87 -- # locks_exist 3888861 00:05:58.618 22:48:26 -- event/cpu_locks.sh@22 -- # lslocks -p 3888861 00:05:58.618 22:48:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.878 lslocks: write error 00:05:58.878 22:48:27 -- event/cpu_locks.sh@89 -- # killprocess 3888861 00:05:58.878 22:48:27 -- common/autotest_common.sh@926 -- # '[' -z 3888861 ']' 00:05:58.878 22:48:27 -- common/autotest_common.sh@930 -- # kill -0 3888861 00:05:58.879 22:48:27 -- common/autotest_common.sh@931 -- # uname 00:05:58.879 22:48:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.140 22:48:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3888861 00:05:59.140 22:48:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.140 22:48:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.140 22:48:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3888861' 00:05:59.140 killing process with pid 3888861 00:05:59.140 22:48:27 -- common/autotest_common.sh@945 -- # kill 3888861 00:05:59.140 22:48:27 -- common/autotest_common.sh@950 -- # wait 3888861 00:05:59.402 22:48:27 -- event/cpu_locks.sh@90 -- # killprocess 3888948 00:05:59.402 22:48:27 -- common/autotest_common.sh@926 -- # '[' -z 3888948 ']' 00:05:59.402 22:48:27 -- common/autotest_common.sh@930 -- # kill -0 3888948 00:05:59.402 22:48:27 -- common/autotest_common.sh@931 -- # uname 00:05:59.402 22:48:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.402 22:48:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3888948 00:05:59.402 22:48:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.402 22:48:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.402 22:48:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3888948' 00:05:59.402 killing process with pid 3888948 00:05:59.402 22:48:27 -- common/autotest_common.sh@945 -- # kill 3888948 00:05:59.402 22:48:27 -- common/autotest_common.sh@950 -- # wait 3888948 00:05:59.662 00:05:59.662 real 0m2.795s 00:05:59.662 user 0m3.026s 00:05:59.662 sys 0m0.827s 00:05:59.662 22:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.662 22:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:59.662 ************************************ 00:05:59.662 END TEST non_locking_app_on_locked_coremask 00:05:59.662 ************************************ 00:05:59.662 22:48:27 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:59.662 22:48:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.662 22:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.662 22:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:59.662 ************************************ 00:05:59.662 START TEST locking_app_on_unlocked_coremask 00:05:59.662 ************************************ 00:05:59.662 22:48:27 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:59.662 22:48:27 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3889339 00:05:59.662 22:48:27 -- event/cpu_locks.sh@99 -- # waitforlisten 3889339 /var/tmp/spdk.sock 00:05:59.662 22:48:27 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.662 22:48:27 -- common/autotest_common.sh@819 -- # '[' -z 3889339 ']' 00:05:59.662 22:48:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.662 22:48:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:59.662 22:48:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.662 22:48:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:59.662 22:48:27 -- common/autotest_common.sh@10 -- # set +x 00:05:59.922 [2024-06-09 22:48:27.865430] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:05:59.922 [2024-06-09 22:48:27.865487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889339 ] 00:05:59.922 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.922 [2024-06-09 22:48:27.925264] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:59.922 [2024-06-09 22:48:27.925298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.922 [2024-06-09 22:48:27.987632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.922 [2024-06-09 22:48:27.987771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.493 22:48:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:00.493 22:48:28 -- common/autotest_common.sh@852 -- # return 0 00:06:00.493 22:48:28 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:00.493 22:48:28 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3889658 00:06:00.493 22:48:28 -- event/cpu_locks.sh@103 -- # waitforlisten 3889658 /var/tmp/spdk2.sock 00:06:00.493 22:48:28 -- common/autotest_common.sh@819 -- # '[' -z 3889658 ']' 00:06:00.493 22:48:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.493 22:48:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.493 22:48:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:00.493 22:48:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.493 22:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:00.493 [2024-06-09 22:48:28.654235] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:00.493 [2024-06-09 22:48:28.654286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889658 ] 00:06:00.753 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.753 [2024-06-09 22:48:28.742563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.753 [2024-06-09 22:48:28.869723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.753 [2024-06-09 22:48:28.869856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.324 22:48:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.324 22:48:29 -- common/autotest_common.sh@852 -- # return 0 00:06:01.324 22:48:29 -- event/cpu_locks.sh@105 -- # locks_exist 3889658 00:06:01.324 22:48:29 -- event/cpu_locks.sh@22 -- # lslocks -p 3889658 00:06:01.324 22:48:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.896 lslocks: write error 00:06:01.896 22:48:29 -- event/cpu_locks.sh@107 -- # killprocess 3889339 00:06:01.896 22:48:29 -- common/autotest_common.sh@926 -- # '[' -z 3889339 ']' 00:06:01.896 22:48:29 -- common/autotest_common.sh@930 -- # kill -0 3889339 00:06:01.896 22:48:29 -- common/autotest_common.sh@931 -- # uname 00:06:01.896 22:48:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:01.896 22:48:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3889339 00:06:01.896 22:48:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:01.896 22:48:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:01.896 22:48:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3889339' 00:06:01.896 killing process with pid 3889339 00:06:01.896 22:48:29 -- common/autotest_common.sh@945 -- # kill 3889339 00:06:01.896 22:48:29 -- common/autotest_common.sh@950 -- # wait 3889339 00:06:02.466 22:48:30 -- event/cpu_locks.sh@108 -- # killprocess 3889658 00:06:02.466 22:48:30 -- common/autotest_common.sh@926 -- # '[' -z 3889658 ']' 00:06:02.466 22:48:30 -- common/autotest_common.sh@930 -- # kill -0 3889658 00:06:02.466 22:48:30 -- common/autotest_common.sh@931 -- # uname 00:06:02.466 22:48:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.466 22:48:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3889658 00:06:02.466 22:48:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:02.466 22:48:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:02.467 22:48:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3889658' 00:06:02.467 killing process with pid 3889658 00:06:02.467 22:48:30 -- common/autotest_common.sh@945 -- # kill 3889658 00:06:02.467 22:48:30 -- common/autotest_common.sh@950 -- # wait 3889658 00:06:02.467 00:06:02.467 real 0m2.821s 00:06:02.467 user 0m3.047s 00:06:02.467 sys 0m0.834s 00:06:02.467 22:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.467 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:02.467 ************************************ 00:06:02.467 END TEST locking_app_on_unlocked_coremask 
00:06:02.467 ************************************ 00:06:02.727 22:48:30 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.727 22:48:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.727 22:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.727 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 ************************************ 00:06:02.727 START TEST locking_app_on_locked_coremask 00:06:02.727 ************************************ 00:06:02.727 22:48:30 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:02.727 22:48:30 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3890033 00:06:02.727 22:48:30 -- event/cpu_locks.sh@116 -- # waitforlisten 3890033 /var/tmp/spdk.sock 00:06:02.727 22:48:30 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.727 22:48:30 -- common/autotest_common.sh@819 -- # '[' -z 3890033 ']' 00:06:02.727 22:48:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.727 22:48:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.727 22:48:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.727 22:48:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.727 22:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:02.727 [2024-06-09 22:48:30.730937] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:02.727 [2024-06-09 22:48:30.730998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890033 ] 00:06:02.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.728 [2024-06-09 22:48:30.789082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.728 [2024-06-09 22:48:30.852340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.728 [2024-06-09 22:48:30.852473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.669 22:48:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:03.669 22:48:31 -- common/autotest_common.sh@852 -- # return 0 00:06:03.669 22:48:31 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3890183 00:06:03.669 22:48:31 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3890183 /var/tmp/spdk2.sock 00:06:03.669 22:48:31 -- common/autotest_common.sh@640 -- # local es=0 00:06:03.669 22:48:31 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.670 22:48:31 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3890183 /var/tmp/spdk2.sock 00:06:03.670 22:48:31 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:03.670 22:48:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:03.670 22:48:31 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:03.670 22:48:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:03.670 22:48:31 -- common/autotest_common.sh@643 -- # waitforlisten 3890183 /var/tmp/spdk2.sock 00:06:03.670 22:48:31 -- common/autotest_common.sh@819 -- 
# '[' -z 3890183 ']' 00:06:03.670 22:48:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.670 22:48:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.670 22:48:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.670 22:48:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.670 22:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:03.670 [2024-06-09 22:48:31.529186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:03.670 [2024-06-09 22:48:31.529237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890183 ] 00:06:03.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.670 [2024-06-09 22:48:31.616812] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3890033 has claimed it. 00:06:03.670 [2024-06-09 22:48:31.616852] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3890183) - No such process 00:06:04.241 ERROR: process (pid: 3890183) is no longer running 00:06:04.241 22:48:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.241 22:48:32 -- common/autotest_common.sh@852 -- # return 1 00:06:04.241 22:48:32 -- common/autotest_common.sh@643 -- # es=1 00:06:04.241 22:48:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:04.241 22:48:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:04.241 22:48:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:04.241 22:48:32 -- event/cpu_locks.sh@122 -- # locks_exist 3890033 00:06:04.241 22:48:32 -- event/cpu_locks.sh@22 -- # lslocks -p 3890033 00:06:04.241 22:48:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.501 lslocks: write error 00:06:04.501 22:48:32 -- event/cpu_locks.sh@124 -- # killprocess 3890033 00:06:04.501 22:48:32 -- common/autotest_common.sh@926 -- # '[' -z 3890033 ']' 00:06:04.501 22:48:32 -- common/autotest_common.sh@930 -- # kill -0 3890033 00:06:04.501 22:48:32 -- common/autotest_common.sh@931 -- # uname 00:06:04.501 22:48:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.501 22:48:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3890033 00:06:04.501 22:48:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:04.501 22:48:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:04.501 22:48:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3890033' 00:06:04.501 killing process with pid 3890033 00:06:04.501 22:48:32 -- common/autotest_common.sh@945 -- # kill 3890033 00:06:04.501 22:48:32 -- common/autotest_common.sh@950 -- # wait 3890033 00:06:04.762 00:06:04.762 real 0m2.197s 00:06:04.762 user 0m2.429s 00:06:04.762 sys 0m0.593s 00:06:04.762 22:48:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.762 22:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:04.762 ************************************ 00:06:04.762 END TEST locking_app_on_locked_coremask 00:06:04.762 ************************************ 00:06:04.762 
22:48:32 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.762 22:48:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.762 22:48:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.762 22:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:04.762 ************************************ 00:06:04.762 START TEST locking_overlapped_coremask 00:06:04.762 ************************************ 00:06:04.762 22:48:32 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:04.762 22:48:32 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3890441 00:06:04.762 22:48:32 -- event/cpu_locks.sh@133 -- # waitforlisten 3890441 /var/tmp/spdk.sock 00:06:04.762 22:48:32 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.762 22:48:32 -- common/autotest_common.sh@819 -- # '[' -z 3890441 ']' 00:06:04.762 22:48:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.762 22:48:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.762 22:48:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.762 22:48:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.762 22:48:32 -- common/autotest_common.sh@10 -- # set +x 00:06:05.023 [2024-06-09 22:48:32.975341] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:05.023 [2024-06-09 22:48:32.975400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890441 ] 00:06:05.023 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.023 [2024-06-09 22:48:33.034802] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.023 [2024-06-09 22:48:33.101241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.023 [2024-06-09 22:48:33.101397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.023 [2024-06-09 22:48:33.101538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.023 [2024-06-09 22:48:33.101633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.967 22:48:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.967 22:48:33 -- common/autotest_common.sh@852 -- # return 0 00:06:05.967 22:48:33 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3890749 00:06:05.967 22:48:33 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3890749 /var/tmp/spdk2.sock 00:06:05.967 22:48:33 -- common/autotest_common.sh@640 -- # local es=0 00:06:05.967 22:48:33 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:05.967 22:48:33 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3890749 /var/tmp/spdk2.sock 00:06:05.967 22:48:33 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:05.967 22:48:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:05.967 22:48:33 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:05.967 22:48:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:05.967 22:48:33 
-- common/autotest_common.sh@643 -- # waitforlisten 3890749 /var/tmp/spdk2.sock 00:06:05.967 22:48:33 -- common/autotest_common.sh@819 -- # '[' -z 3890749 ']' 00:06:05.967 22:48:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.967 22:48:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.967 22:48:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.967 22:48:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.967 22:48:33 -- common/autotest_common.sh@10 -- # set +x 00:06:05.967 [2024-06-09 22:48:33.858783] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:05.967 [2024-06-09 22:48:33.858836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890749 ] 00:06:05.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.967 [2024-06-09 22:48:33.929969] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3890441 has claimed it. 00:06:05.967 [2024-06-09 22:48:33.930001] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3890749) - No such process 00:06:06.541 ERROR: process (pid: 3890749) is no longer running 00:06:06.541 22:48:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.541 22:48:34 -- common/autotest_common.sh@852 -- # return 1 00:06:06.541 22:48:34 -- common/autotest_common.sh@643 -- # es=1 00:06:06.541 22:48:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:06.541 22:48:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:06.541 22:48:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:06.541 22:48:34 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:06.541 22:48:34 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.541 22:48:34 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.541 22:48:34 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.541 22:48:34 -- event/cpu_locks.sh@141 -- # killprocess 3890441 00:06:06.541 22:48:34 -- common/autotest_common.sh@926 -- # '[' -z 3890441 ']' 00:06:06.541 22:48:34 -- common/autotest_common.sh@930 -- # kill -0 3890441 00:06:06.541 22:48:34 -- common/autotest_common.sh@931 -- # uname 00:06:06.541 22:48:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.541 22:48:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3890441 00:06:06.541 22:48:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:06.541 22:48:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:06.541 22:48:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3890441' 00:06:06.541 killing process with pid 3890441 00:06:06.541 22:48:34 -- common/autotest_common.sh@945 -- # kill 3890441 00:06:06.541 22:48:34 
-- common/autotest_common.sh@950 -- # wait 3890441 00:06:06.802 00:06:06.802 real 0m1.802s 00:06:06.802 user 0m5.187s 00:06:06.802 sys 0m0.360s 00:06:06.802 22:48:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.802 22:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.802 ************************************ 00:06:06.802 END TEST locking_overlapped_coremask 00:06:06.802 ************************************ 00:06:06.802 22:48:34 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:06.802 22:48:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.802 22:48:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.802 22:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.802 ************************************ 00:06:06.802 START TEST locking_overlapped_coremask_via_rpc 00:06:06.802 ************************************ 00:06:06.802 22:48:34 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:06.802 22:48:34 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3890867 00:06:06.802 22:48:34 -- event/cpu_locks.sh@149 -- # waitforlisten 3890867 /var/tmp/spdk.sock 00:06:06.802 22:48:34 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:06.802 22:48:34 -- common/autotest_common.sh@819 -- # '[' -z 3890867 ']' 00:06:06.802 22:48:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.802 22:48:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.803 22:48:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.803 22:48:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.803 22:48:34 -- common/autotest_common.sh@10 -- # set +x 00:06:06.803 [2024-06-09 22:48:34.820730] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:06.803 [2024-06-09 22:48:34.820791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890867 ] 00:06:06.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.803 [2024-06-09 22:48:34.880004] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.803 [2024-06-09 22:48:34.880035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.803 [2024-06-09 22:48:34.945847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:06.803 [2024-06-09 22:48:34.946063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.803 [2024-06-09 22:48:34.946180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.803 [2024-06-09 22:48:34.946183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.747 22:48:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.747 22:48:35 -- common/autotest_common.sh@852 -- # return 0 00:06:07.747 22:48:35 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3891128 00:06:07.747 22:48:35 -- event/cpu_locks.sh@153 -- # waitforlisten 3891128 /var/tmp/spdk2.sock 00:06:07.747 22:48:35 -- common/autotest_common.sh@819 -- # '[' -z 3891128 ']' 00:06:07.747 22:48:35 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.747 22:48:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.747 22:48:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.747 22:48:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.747 22:48:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.747 22:48:35 -- common/autotest_common.sh@10 -- # set +x 00:06:07.747 [2024-06-09 22:48:35.636679] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:07.747 [2024-06-09 22:48:35.636732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891128 ] 00:06:07.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.747 [2024-06-09 22:48:35.710905] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.747 [2024-06-09 22:48:35.710929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.747 [2024-06-09 22:48:35.810155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.747 [2024-06-09 22:48:35.810383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.747 [2024-06-09 22:48:35.813464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.747 [2024-06-09 22:48:35.813467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:08.319 22:48:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.319 22:48:36 -- common/autotest_common.sh@852 -- # return 0 00:06:08.319 22:48:36 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:08.319 22:48:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.319 22:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.319 22:48:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.319 22:48:36 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.319 22:48:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:08.319 22:48:36 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.319 22:48:36 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:08.319 22:48:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.319 22:48:36 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:08.319 22:48:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:08.319 22:48:36 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:08.319 22:48:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.319 22:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.319 [2024-06-09 22:48:36.409461] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3890867 has claimed it. 00:06:08.319 request: 00:06:08.319 { 00:06:08.319 "method": "framework_enable_cpumask_locks", 00:06:08.319 "req_id": 1 00:06:08.319 } 00:06:08.319 Got JSON-RPC error response 00:06:08.319 response: 00:06:08.319 { 00:06:08.319 "code": -32603, 00:06:08.319 "message": "Failed to claim CPU core: 2" 00:06:08.319 } 00:06:08.319 22:48:36 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:08.319 22:48:36 -- common/autotest_common.sh@643 -- # es=1 00:06:08.319 22:48:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:08.319 22:48:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:08.319 22:48:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:08.319 22:48:36 -- event/cpu_locks.sh@158 -- # waitforlisten 3890867 /var/tmp/spdk.sock 00:06:08.319 22:48:36 -- common/autotest_common.sh@819 -- # '[' -z 3890867 ']' 00:06:08.319 22:48:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.319 22:48:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.320 22:48:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.320 22:48:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.320 22:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.581 22:48:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.581 22:48:36 -- common/autotest_common.sh@852 -- # return 0 00:06:08.581 22:48:36 -- event/cpu_locks.sh@159 -- # waitforlisten 3891128 /var/tmp/spdk2.sock 00:06:08.581 22:48:36 -- common/autotest_common.sh@819 -- # '[' -z 3891128 ']' 00:06:08.581 22:48:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.581 22:48:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.581 22:48:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.581 22:48:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.581 22:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.581 22:48:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.581 22:48:36 -- common/autotest_common.sh@852 -- # return 0 00:06:08.581 22:48:36 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:08.581 22:48:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.581 22:48:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.581 22:48:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.581 00:06:08.581 real 0m1.974s 00:06:08.581 user 0m0.732s 00:06:08.581 sys 0m0.168s 00:06:08.581 22:48:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.581 22:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:08.581 ************************************ 00:06:08.581 END TEST locking_overlapped_coremask_via_rpc 00:06:08.581 ************************************ 00:06:08.841 22:48:36 -- event/cpu_locks.sh@174 -- # cleanup 00:06:08.841 22:48:36 -- event/cpu_locks.sh@15 -- # [[ -z 3890867 ]] 00:06:08.841 22:48:36 -- event/cpu_locks.sh@15 -- # killprocess 3890867 00:06:08.841 22:48:36 -- common/autotest_common.sh@926 -- # '[' -z 3890867 ']' 00:06:08.841 22:48:36 -- common/autotest_common.sh@930 -- # kill -0 3890867 00:06:08.841 22:48:36 -- common/autotest_common.sh@931 -- # uname 00:06:08.841 22:48:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.841 22:48:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3890867 00:06:08.841 22:48:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.841 22:48:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.841 22:48:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3890867' 00:06:08.841 killing process with pid 3890867 00:06:08.841 22:48:36 -- common/autotest_common.sh@945 -- # kill 3890867 00:06:08.841 22:48:36 -- common/autotest_common.sh@950 -- # wait 3890867 00:06:09.102 22:48:37 -- event/cpu_locks.sh@16 -- # [[ -z 3891128 ]] 00:06:09.102 22:48:37 -- event/cpu_locks.sh@16 -- # killprocess 3891128 00:06:09.102 22:48:37 -- common/autotest_common.sh@926 -- # '[' -z 3891128 ']' 00:06:09.102 22:48:37 -- common/autotest_common.sh@930 -- # kill -0 3891128 00:06:09.102 22:48:37 -- common/autotest_common.sh@931 -- # uname 
00:06:09.102 22:48:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:09.102 22:48:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3891128 00:06:09.102 22:48:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:09.103 22:48:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:09.103 22:48:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3891128' 00:06:09.103 killing process with pid 3891128 00:06:09.103 22:48:37 -- common/autotest_common.sh@945 -- # kill 3891128 00:06:09.103 22:48:37 -- common/autotest_common.sh@950 -- # wait 3891128 00:06:09.364 22:48:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.364 22:48:37 -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.364 22:48:37 -- event/cpu_locks.sh@15 -- # [[ -z 3890867 ]] 00:06:09.364 22:48:37 -- event/cpu_locks.sh@15 -- # killprocess 3890867 00:06:09.364 22:48:37 -- common/autotest_common.sh@926 -- # '[' -z 3890867 ']' 00:06:09.364 22:48:37 -- common/autotest_common.sh@930 -- # kill -0 3890867 00:06:09.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3890867) - No such process 00:06:09.364 22:48:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3890867 is not found' 00:06:09.364 Process with pid 3890867 is not found 00:06:09.364 22:48:37 -- event/cpu_locks.sh@16 -- # [[ -z 3891128 ]] 00:06:09.364 22:48:37 -- event/cpu_locks.sh@16 -- # killprocess 3891128 00:06:09.364 22:48:37 -- common/autotest_common.sh@926 -- # '[' -z 3891128 ']' 00:06:09.364 22:48:37 -- common/autotest_common.sh@930 -- # kill -0 3891128 00:06:09.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3891128) - No such process 00:06:09.364 22:48:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3891128 is not found' 00:06:09.364 Process with pid 3891128 is not found 00:06:09.364 22:48:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.364 00:06:09.364 real 0m15.423s 00:06:09.364 user 0m26.873s 00:06:09.364 sys 0m4.492s 00:06:09.364 22:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.364 22:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 ************************************ 00:06:09.364 END TEST cpu_locks 00:06:09.364 ************************************ 00:06:09.364 00:06:09.364 real 0m41.131s 00:06:09.364 user 1m20.709s 00:06:09.364 sys 0m7.435s 00:06:09.364 22:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.364 22:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 ************************************ 00:06:09.364 END TEST event 00:06:09.364 ************************************ 00:06:09.364 22:48:37 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.364 22:48:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.364 22:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.364 22:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 ************************************ 00:06:09.364 START TEST thread 00:06:09.364 ************************************ 00:06:09.364 22:48:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:09.364 * Looking for test storage... 
00:06:09.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:09.364 22:48:37 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.364 22:48:37 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:09.364 22:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.364 22:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:09.364 ************************************ 00:06:09.364 START TEST thread_poller_perf 00:06:09.364 ************************************ 00:06:09.364 22:48:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.364 [2024-06-09 22:48:37.498065] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:09.364 [2024-06-09 22:48:37.498171] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891562 ] 00:06:09.364 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.626 [2024-06-09 22:48:37.559914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.626 [2024-06-09 22:48:37.621434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.626 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:10.570 ====================================== 00:06:10.570 busy:2414067302 (cyc) 00:06:10.570 total_run_count: 274000 00:06:10.570 tsc_hz: 2400000000 (cyc) 00:06:10.570 ====================================== 00:06:10.570 poller_cost: 8810 (cyc), 3670 (nsec) 00:06:10.570 00:06:10.570 real 0m1.206s 00:06:10.570 user 0m1.132s 00:06:10.570 sys 0m0.069s 00:06:10.570 22:48:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.570 22:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:10.570 ************************************ 00:06:10.570 END TEST thread_poller_perf 00:06:10.570 ************************************ 00:06:10.570 22:48:38 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.570 22:48:38 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:10.570 22:48:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.570 22:48:38 -- common/autotest_common.sh@10 -- # set +x 00:06:10.570 ************************************ 00:06:10.570 START TEST thread_poller_perf 00:06:10.570 ************************************ 00:06:10.570 22:48:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:10.830 [2024-06-09 22:48:38.748245] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:10.830 [2024-06-09 22:48:38.748345] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891914 ] 00:06:10.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.830 [2024-06-09 22:48:38.811678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.830 [2024-06-09 22:48:38.872106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.830 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:11.773 ====================================== 00:06:11.773 busy:2402636392 (cyc) 00:06:11.773 total_run_count: 3810000 00:06:11.773 tsc_hz: 2400000000 (cyc) 00:06:11.773 ====================================== 00:06:11.773 poller_cost: 630 (cyc), 262 (nsec) 00:06:11.773 00:06:11.773 real 0m1.198s 00:06:11.773 user 0m1.131s 00:06:11.773 sys 0m0.062s 00:06:11.773 22:48:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.773 22:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:11.773 ************************************ 00:06:11.773 END TEST thread_poller_perf 00:06:11.773 ************************************ 00:06:12.035 22:48:39 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.035 00:06:12.035 real 0m2.583s 00:06:12.035 user 0m2.338s 00:06:12.035 sys 0m0.259s 00:06:12.035 22:48:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.035 22:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:12.035 ************************************ 00:06:12.035 END TEST thread 00:06:12.035 ************************************ 00:06:12.035 22:48:39 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:12.035 22:48:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.035 22:48:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.035 22:48:39 -- common/autotest_common.sh@10 -- # set +x 00:06:12.035 ************************************ 00:06:12.035 START TEST accel 00:06:12.035 ************************************ 00:06:12.035 22:48:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:12.035 * Looking for test storage... 00:06:12.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:12.035 22:48:40 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:12.035 22:48:40 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:12.035 22:48:40 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.035 22:48:40 -- accel/accel.sh@59 -- # spdk_tgt_pid=3892212 00:06:12.035 22:48:40 -- accel/accel.sh@60 -- # waitforlisten 3892212 00:06:12.035 22:48:40 -- common/autotest_common.sh@819 -- # '[' -z 3892212 ']' 00:06:12.035 22:48:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.035 22:48:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.035 22:48:40 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:12.035 22:48:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:12.035 22:48:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.035 22:48:40 -- accel/accel.sh@58 -- # build_accel_config 00:06:12.035 22:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.035 22:48:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.035 22:48:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:48:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:48:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:48:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.035 22:48:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.035 22:48:40 -- accel/accel.sh@42 -- # jq -r . 00:06:12.035 [2024-06-09 22:48:40.153304] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:12.035 [2024-06-09 22:48:40.153384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892212 ] 00:06:12.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.309 [2024-06-09 22:48:40.221470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.309 [2024-06-09 22:48:40.292667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.309 [2024-06-09 22:48:40.292805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.881 22:48:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.881 22:48:40 -- common/autotest_common.sh@852 -- # return 0 00:06:12.881 22:48:40 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:12.881 22:48:40 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:12.881 22:48:40 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:12.881 22:48:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.881 22:48:40 -- common/autotest_common.sh@10 -- # set +x 00:06:12.881 22:48:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 
22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # IFS== 00:06:12.881 22:48:40 -- accel/accel.sh@64 -- # read -r opc module 00:06:12.881 22:48:40 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:12.881 22:48:40 -- accel/accel.sh@67 -- # killprocess 3892212 00:06:12.881 22:48:40 -- common/autotest_common.sh@926 -- # '[' -z 3892212 ']' 00:06:12.881 22:48:40 -- common/autotest_common.sh@930 -- # kill -0 3892212 00:06:12.881 22:48:40 -- common/autotest_common.sh@931 -- # uname 00:06:12.881 22:48:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:12.881 22:48:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3892212 00:06:12.881 22:48:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:12.881 22:48:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:12.881 22:48:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3892212' 00:06:12.881 killing process with pid 3892212 00:06:12.881 22:48:41 -- common/autotest_common.sh@945 -- # kill 3892212 00:06:12.881 22:48:41 -- common/autotest_common.sh@950 -- # wait 3892212 00:06:13.142 22:48:41 -- accel/accel.sh@68 -- # trap - ERR 00:06:13.142 22:48:41 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:13.142 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:13.142 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.142 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.142 22:48:41 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:13.142 22:48:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:13.142 22:48:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.142 22:48:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.142 22:48:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.142 22:48:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.142 22:48:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.142 22:48:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.142 22:48:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.142 22:48:41 -- accel/accel.sh@42 -- # jq -r . 
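Note: the long run of IFS== / read -r opc module lines in the accel setup above is get_expected_opcs filling an associative array from the accel_get_opc_assignments RPC, whose JSON reply is flattened into key=value pairs by the jq expression shown in the xtrace. A standalone sketch of that parsing pattern; the sample JSON below is invented for illustration, the real values come from the RPC:

  # Sketch of the key=value parsing seen in the xtrace (sample JSON is made up).
  declare -A expected_opcs
  sample='{"copy":"software","crc32c":"software"}'
  exp_opcs=($(echo "$sample" | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # split "copy=software" on '='
      expected_opcs["$opc"]=$module
  done
  declare -p expected_opcs   # e.g. declare -A expected_opcs=([copy]="software" [crc32c]="software" )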
00:06:13.142 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.142 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.143 22:48:41 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:13.143 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:13.143 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.143 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.143 ************************************ 00:06:13.143 START TEST accel_missing_filename 00:06:13.143 ************************************ 00:06:13.143 22:48:41 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:13.143 22:48:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:13.143 22:48:41 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:13.143 22:48:41 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:13.143 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.143 22:48:41 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:13.143 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.143 22:48:41 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:13.143 22:48:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:13.143 22:48:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.143 22:48:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.143 22:48:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.143 22:48:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.143 22:48:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.143 22:48:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.143 22:48:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.143 22:48:41 -- accel/accel.sh@42 -- # jq -r . 00:06:13.404 [2024-06-09 22:48:41.330443] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:13.404 [2024-06-09 22:48:41.330544] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892364 ] 00:06:13.404 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.405 [2024-06-09 22:48:41.392229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.405 [2024-06-09 22:48:41.454710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.405 [2024-06-09 22:48:41.486648] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.405 [2024-06-09 22:48:41.523649] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.405 A filename is required. 
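Note: the 'A filename is required.' error above is the expected outcome; accel_missing_filename runs accel_perf -t 1 -w compress through the NOT wrapper, so the case passes precisely because the tool refuses to run without -l. A minimal sketch of that inversion pattern; the real helper in autotest_common.sh additionally normalizes the exit status, which is what the es=234 / es=106 / es=1 lines that follow are doing:

  # Sketch of the NOT pattern: succeed only when the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded, so the test fails
      fi
      return 0        # command failed as expected, so the test passes
  }
  # e.g. (path taken from the xtrace above):
  NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress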
00:06:13.405 22:48:41 -- common/autotest_common.sh@643 -- # es=234 00:06:13.405 22:48:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:13.405 22:48:41 -- common/autotest_common.sh@652 -- # es=106 00:06:13.405 22:48:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:13.405 22:48:41 -- common/autotest_common.sh@660 -- # es=1 00:06:13.405 22:48:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:13.405 00:06:13.405 real 0m0.278s 00:06:13.405 user 0m0.215s 00:06:13.405 sys 0m0.101s 00:06:13.405 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.405 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.405 ************************************ 00:06:13.405 END TEST accel_missing_filename 00:06:13.405 ************************************ 00:06:13.667 22:48:41 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.667 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:13.667 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.667 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.667 ************************************ 00:06:13.667 START TEST accel_compress_verify 00:06:13.667 ************************************ 00:06:13.667 22:48:41 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.667 22:48:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:13.667 22:48:41 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.667 22:48:41 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:13.667 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.667 22:48:41 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:13.667 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.667 22:48:41 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.667 22:48:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:13.667 22:48:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.667 22:48:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.667 22:48:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.667 22:48:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.667 22:48:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.667 22:48:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.667 22:48:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.667 22:48:41 -- accel/accel.sh@42 -- # jq -r . 00:06:13.667 [2024-06-09 22:48:41.650639] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:13.667 [2024-06-09 22:48:41.650733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892544 ] 00:06:13.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.667 [2024-06-09 22:48:41.712435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.667 [2024-06-09 22:48:41.774366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.667 [2024-06-09 22:48:41.806077] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.667 [2024-06-09 22:48:41.843100] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.930 00:06:13.930 Compression does not support the verify option, aborting. 00:06:13.930 22:48:41 -- common/autotest_common.sh@643 -- # es=161 00:06:13.930 22:48:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:13.930 22:48:41 -- common/autotest_common.sh@652 -- # es=33 00:06:13.930 22:48:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:13.930 22:48:41 -- common/autotest_common.sh@660 -- # es=1 00:06:13.930 22:48:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:13.930 00:06:13.930 real 0m0.276s 00:06:13.930 user 0m0.217s 00:06:13.930 sys 0m0.100s 00:06:13.930 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.930 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 END TEST accel_compress_verify 00:06:13.930 ************************************ 00:06:13.930 22:48:41 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:13.930 22:48:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:13.930 22:48:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.930 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 START TEST accel_wrong_workload 00:06:13.930 ************************************ 00:06:13.930 22:48:41 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:13.930 22:48:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:13.930 22:48:41 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:13.930 22:48:41 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:13.930 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.930 22:48:41 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:13.930 22:48:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.930 22:48:41 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:13.930 22:48:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:13.930 22:48:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.930 22:48:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.930 22:48:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.930 22:48:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.930 22:48:41 -- accel/accel.sh@42 -- # jq -r . 
00:06:13.930 Unsupported workload type: foobar 00:06:13.930 [2024-06-09 22:48:41.967867] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:13.930 accel_perf options: 00:06:13.930 [-h help message] 00:06:13.930 [-q queue depth per core] 00:06:13.930 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.930 [-T number of threads per core 00:06:13.930 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.930 [-t time in seconds] 00:06:13.930 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.930 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.930 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.930 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.930 [-S for crc32c workload, use this seed value (default 0) 00:06:13.930 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.930 [-f for fill workload, use this BYTE value (default 255) 00:06:13.930 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.930 [-y verify result if this switch is on] 00:06:13.930 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.930 Can be used to spread operations across a wider range of memory. 00:06:13.930 22:48:41 -- common/autotest_common.sh@643 -- # es=1 00:06:13.930 22:48:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:13.930 22:48:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:13.930 22:48:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:13.930 00:06:13.930 real 0m0.036s 00:06:13.930 user 0m0.027s 00:06:13.930 sys 0m0.009s 00:06:13.930 22:48:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.930 22:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 END TEST accel_wrong_workload 00:06:13.930 ************************************ 00:06:13.930 Error: writing output failed: Broken pipe 00:06:13.930 22:48:42 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.930 22:48:42 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:13.930 22:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.930 22:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 START TEST accel_negative_buffers 00:06:13.930 ************************************ 00:06:13.930 22:48:42 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.930 22:48:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:13.930 22:48:42 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:13.930 22:48:42 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:13.930 22:48:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.930 22:48:42 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:13.930 22:48:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:13.930 22:48:42 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:13.930 22:48:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:13.930 22:48:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.930 22:48:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.930 22:48:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.930 22:48:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.930 22:48:42 -- accel/accel.sh@42 -- # jq -r . 00:06:13.930 -x option must be non-negative. 00:06:13.930 [2024-06-09 22:48:42.047106] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:13.930 accel_perf options: 00:06:13.930 [-h help message] 00:06:13.930 [-q queue depth per core] 00:06:13.930 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.930 [-T number of threads per core 00:06:13.930 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.930 [-t time in seconds] 00:06:13.930 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.930 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.930 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.930 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.930 [-S for crc32c workload, use this seed value (default 0) 00:06:13.930 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.930 [-f for fill workload, use this BYTE value (default 255) 00:06:13.930 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.930 [-y verify result if this switch is on] 00:06:13.930 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.930 Can be used to spread operations across a wider range of memory. 
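Note: the two failures above (workload 'foobar', then '-x -1') are deliberate argument-validation checks, and both print the same usage text. For contrast, these are the shapes of accel_perf invocations that do pass validation in this run; the paths and options are copied from xtrace lines elsewhere in this log, except the xor example, which only illustrates the '-x ... minimum: 2' rule from the usage text and is not actually run here:

  # Invocations accepted by accel_perf (-c /dev/fd/62 is the JSON config fd the
  # harness pipes in; the xor line is illustrative only).
  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  $accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y              # crc32c, seed 32, verify
  $accel_perf -c /dev/fd/62 -t 1 -w copy -y                      # copy with verification
  $accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y   # fill, queue depth 64
  $accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 2                  # xor needs at least 2 source buffers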
00:06:13.930 22:48:42 -- common/autotest_common.sh@643 -- # es=1 00:06:13.930 22:48:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:13.930 22:48:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:13.930 22:48:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:13.930 00:06:13.930 real 0m0.036s 00:06:13.930 user 0m0.020s 00:06:13.930 sys 0m0.016s 00:06:13.930 22:48:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.930 22:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 END TEST accel_negative_buffers 00:06:13.930 ************************************ 00:06:13.930 Error: writing output failed: Broken pipe 00:06:13.930 22:48:42 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:13.930 22:48:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:13.930 22:48:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.930 22:48:42 -- common/autotest_common.sh@10 -- # set +x 00:06:13.930 ************************************ 00:06:13.930 START TEST accel_crc32c 00:06:13.930 ************************************ 00:06:13.930 22:48:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:13.930 22:48:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.930 22:48:42 -- accel/accel.sh@17 -- # local accel_module 00:06:13.930 22:48:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:13.930 22:48:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:13.930 22:48:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.930 22:48:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.930 22:48:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.930 22:48:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.930 22:48:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.930 22:48:42 -- accel/accel.sh@42 -- # jq -r . 00:06:14.192 [2024-06-09 22:48:42.122521] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:14.192 [2024-06-09 22:48:42.122589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892761 ] 00:06:14.192 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.192 [2024-06-09 22:48:42.183719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.192 [2024-06-09 22:48:42.247506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.586 22:48:43 -- accel/accel.sh@18 -- # out=' 00:06:15.586 SPDK Configuration: 00:06:15.586 Core mask: 0x1 00:06:15.586 00:06:15.586 Accel Perf Configuration: 00:06:15.586 Workload Type: crc32c 00:06:15.586 CRC-32C seed: 32 00:06:15.586 Transfer size: 4096 bytes 00:06:15.586 Vector count 1 00:06:15.586 Module: software 00:06:15.586 Queue depth: 32 00:06:15.586 Allocate depth: 32 00:06:15.586 # threads/core: 1 00:06:15.586 Run time: 1 seconds 00:06:15.586 Verify: Yes 00:06:15.586 00:06:15.586 Running for 1 seconds... 
00:06:15.586 00:06:15.586 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.586 ------------------------------------------------------------------------------------ 00:06:15.586 0,0 449536/s 1756 MiB/s 0 0 00:06:15.586 ==================================================================================== 00:06:15.586 Total 449536/s 1756 MiB/s 0 0' 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.586 22:48:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:15.586 22:48:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.586 22:48:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.586 22:48:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.586 22:48:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.586 22:48:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.586 22:48:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.586 22:48:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.586 22:48:43 -- accel/accel.sh@42 -- # jq -r . 00:06:15.586 [2024-06-09 22:48:43.399731] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:15.586 [2024-06-09 22:48:43.399802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3892984 ] 00:06:15.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.586 [2024-06-09 22:48:43.459177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.586 [2024-06-09 22:48:43.522475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=0x1 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=crc32c 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=32 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 
22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=software 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=32 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=32 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=1 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val=Yes 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:15.586 22:48:43 -- accel/accel.sh@21 -- # val= 00:06:15.586 22:48:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # IFS=: 00:06:15.586 22:48:43 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 
00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@21 -- # val= 00:06:16.530 22:48:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # IFS=: 00:06:16.530 22:48:44 -- accel/accel.sh@20 -- # read -r var val 00:06:16.530 22:48:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.530 22:48:44 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:16.530 22:48:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.530 00:06:16.530 real 0m2.554s 00:06:16.530 user 0m2.366s 00:06:16.530 sys 0m0.196s 00:06:16.530 22:48:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.530 22:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.530 ************************************ 00:06:16.530 END TEST accel_crc32c 00:06:16.530 ************************************ 00:06:16.530 22:48:44 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:16.530 22:48:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:16.530 22:48:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.530 22:48:44 -- common/autotest_common.sh@10 -- # set +x 00:06:16.530 ************************************ 00:06:16.530 START TEST accel_crc32c_C2 00:06:16.530 ************************************ 00:06:16.530 22:48:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:16.530 22:48:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.530 22:48:44 -- accel/accel.sh@17 -- # local accel_module 00:06:16.530 22:48:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:16.530 22:48:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:16.530 22:48:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.530 22:48:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.530 22:48:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.530 22:48:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.530 22:48:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.530 22:48:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.530 22:48:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.530 22:48:44 -- accel/accel.sh@42 -- # jq -r . 00:06:16.791 [2024-06-09 22:48:44.721284] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
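Note: the bandwidth column in the accel_crc32c table above follows directly from the transfer rate and the 4096-byte transfer size; the same relation holds for the copy and fill tables later in this section. Checking the printed numbers:

  # 449536 transfers/s at 4096 bytes each:
  echo $(( 449536 * 4096 / 1024 / 1024 ))   # -> 1756 MiB/s, matching the table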
00:06:16.791 [2024-06-09 22:48:44.721359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893158 ] 00:06:16.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.791 [2024-06-09 22:48:44.782293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.791 [2024-06-09 22:48:44.846113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.208 22:48:45 -- accel/accel.sh@18 -- # out=' 00:06:18.208 SPDK Configuration: 00:06:18.208 Core mask: 0x1 00:06:18.208 00:06:18.208 Accel Perf Configuration: 00:06:18.208 Workload Type: crc32c 00:06:18.208 CRC-32C seed: 0 00:06:18.208 Transfer size: 4096 bytes 00:06:18.208 Vector count 2 00:06:18.208 Module: software 00:06:18.208 Queue depth: 32 00:06:18.208 Allocate depth: 32 00:06:18.208 # threads/core: 1 00:06:18.208 Run time: 1 seconds 00:06:18.209 Verify: Yes 00:06:18.209 00:06:18.209 Running for 1 seconds... 00:06:18.209 00:06:18.209 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.209 ------------------------------------------------------------------------------------ 00:06:18.209 0,0 378112/s 2954 MiB/s 0 0 00:06:18.209 ==================================================================================== 00:06:18.209 Total 378112/s 1477 MiB/s 0 0' 00:06:18.209 22:48:45 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:45 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:18.209 22:48:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:18.209 22:48:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.209 22:48:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.209 22:48:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.209 22:48:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.209 22:48:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.209 22:48:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.209 22:48:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.209 22:48:45 -- accel/accel.sh@42 -- # jq -r . 00:06:18.209 [2024-06-09 22:48:45.997740] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
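Note: the accel_crc32c_C2 run above uses 'Vector count 2', so each transfer covers two 4096-byte buffers; that is why the per-core line reports 2954 MiB/s while the Total line, which appears to count 4096 bytes per transfer, shows 1477 MiB/s. Again, just arithmetic on the printed numbers:

  # 378112 transfers/s with -C 2 (two 4096-byte iovecs per transfer):
  echo $(( 378112 * 2 * 4096 / 1024 / 1024 ))   # -> 2954 MiB/s (per-core line)
  echo $(( 378112 * 4096 / 1024 / 1024 ))       # -> 1477 MiB/s (Total line)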
00:06:18.209 [2024-06-09 22:48:45.997812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893466 ] 00:06:18.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.209 [2024-06-09 22:48:46.057905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.209 [2024-06-09 22:48:46.119602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=0x1 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=crc32c 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=0 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=software 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=32 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=32 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- 
accel/accel.sh@21 -- # val=1 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val=Yes 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:18.209 22:48:46 -- accel/accel.sh@21 -- # val= 00:06:18.209 22:48:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # IFS=: 00:06:18.209 22:48:46 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@21 -- # val= 00:06:19.153 22:48:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # IFS=: 00:06:19.153 22:48:47 -- accel/accel.sh@20 -- # read -r var val 00:06:19.153 22:48:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.153 22:48:47 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:19.153 22:48:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.153 00:06:19.153 real 0m2.554s 00:06:19.153 user 0m2.369s 00:06:19.153 sys 0m0.191s 00:06:19.153 22:48:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.153 22:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:19.153 ************************************ 00:06:19.153 END TEST accel_crc32c_C2 00:06:19.153 ************************************ 00:06:19.153 22:48:47 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:19.153 22:48:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:19.153 22:48:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.153 22:48:47 -- common/autotest_common.sh@10 -- # set +x 00:06:19.153 ************************************ 00:06:19.153 START TEST accel_copy 
00:06:19.153 ************************************ 00:06:19.153 22:48:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:19.153 22:48:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.153 22:48:47 -- accel/accel.sh@17 -- # local accel_module 00:06:19.153 22:48:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:19.153 22:48:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:19.153 22:48:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.153 22:48:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.153 22:48:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.153 22:48:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.153 22:48:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.153 22:48:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.153 22:48:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.153 22:48:47 -- accel/accel.sh@42 -- # jq -r . 00:06:19.153 [2024-06-09 22:48:47.317966] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:19.153 [2024-06-09 22:48:47.318056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893825 ] 00:06:19.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.414 [2024-06-09 22:48:47.379058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.414 [2024-06-09 22:48:47.444067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.799 22:48:48 -- accel/accel.sh@18 -- # out=' 00:06:20.799 SPDK Configuration: 00:06:20.799 Core mask: 0x1 00:06:20.799 00:06:20.799 Accel Perf Configuration: 00:06:20.799 Workload Type: copy 00:06:20.799 Transfer size: 4096 bytes 00:06:20.799 Vector count 1 00:06:20.799 Module: software 00:06:20.799 Queue depth: 32 00:06:20.799 Allocate depth: 32 00:06:20.799 # threads/core: 1 00:06:20.799 Run time: 1 seconds 00:06:20.799 Verify: Yes 00:06:20.799 00:06:20.799 Running for 1 seconds... 00:06:20.799 00:06:20.799 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.799 ------------------------------------------------------------------------------------ 00:06:20.799 0,0 305376/s 1192 MiB/s 0 0 00:06:20.799 ==================================================================================== 00:06:20.799 Total 305376/s 1192 MiB/s 0 0' 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:20.799 22:48:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:20.799 22:48:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.799 22:48:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.799 22:48:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.799 22:48:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.799 22:48:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.799 22:48:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.799 22:48:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.799 22:48:48 -- accel/accel.sh@42 -- # jq -r . 00:06:20.799 [2024-06-09 22:48:48.595300] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
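Note: each accel_test case in this section starts accel_perf twice, the run captured in the out=' block and a second run under its own DPDK file-prefix (two distinct spdk_pidNNNN prefixes per case, as the EAL parameter lines show), so the roughly 2.5 s of real time reported per case covers two 1-second measurement windows plus application start-up and teardown. A quick, hypothetical way to list the paired start-ups from a saved copy of this console output (the file name nvmf-tcp-phy.log is assumed):

  # Each accel_perf start-up logs exactly one --file-prefix entry.
  grep -o 'file-prefix=spdk_pid[0-9]*' nvmf-tcp-phy.log | sort -u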
00:06:20.799 [2024-06-09 22:48:48.595369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894145 ] 00:06:20.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.799 [2024-06-09 22:48:48.654518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.799 [2024-06-09 22:48:48.716619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=0x1 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=copy 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=software 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=32 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=32 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=1 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val=Yes 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:20.799 22:48:48 -- accel/accel.sh@21 -- # val= 00:06:20.799 22:48:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # IFS=: 00:06:20.799 22:48:48 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@21 -- # val= 00:06:21.742 22:48:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # IFS=: 00:06:21.742 22:48:49 -- accel/accel.sh@20 -- # read -r var val 00:06:21.742 22:48:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.742 22:48:49 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:21.742 22:48:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.742 00:06:21.742 real 0m2.554s 00:06:21.742 user 0m2.360s 00:06:21.742 sys 0m0.200s 00:06:21.742 22:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.742 22:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:21.742 ************************************ 00:06:21.742 END TEST accel_copy 00:06:21.742 ************************************ 00:06:21.742 22:48:49 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.742 22:48:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:21.742 22:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.742 22:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:21.742 ************************************ 00:06:21.742 START TEST accel_fill 00:06:21.742 ************************************ 00:06:21.742 22:48:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.742 22:48:49 -- accel/accel.sh@16 -- # local accel_opc 
00:06:21.742 22:48:49 -- accel/accel.sh@17 -- # local accel_module 00:06:21.742 22:48:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.742 22:48:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:21.742 22:48:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.742 22:48:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.742 22:48:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.742 22:48:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.743 22:48:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.743 22:48:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.743 22:48:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.743 22:48:49 -- accel/accel.sh@42 -- # jq -r . 00:06:21.743 [2024-06-09 22:48:49.917979] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:21.743 [2024-06-09 22:48:49.918086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894317 ] 00:06:22.005 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.005 [2024-06-09 22:48:49.978958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.005 [2024-06-09 22:48:50.045658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.390 22:48:51 -- accel/accel.sh@18 -- # out=' 00:06:23.390 SPDK Configuration: 00:06:23.390 Core mask: 0x1 00:06:23.390 00:06:23.390 Accel Perf Configuration: 00:06:23.390 Workload Type: fill 00:06:23.390 Fill pattern: 0x80 00:06:23.390 Transfer size: 4096 bytes 00:06:23.390 Vector count 1 00:06:23.390 Module: software 00:06:23.390 Queue depth: 64 00:06:23.390 Allocate depth: 64 00:06:23.390 # threads/core: 1 00:06:23.391 Run time: 1 seconds 00:06:23.391 Verify: Yes 00:06:23.391 00:06:23.391 Running for 1 seconds... 00:06:23.391 00:06:23.391 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.391 ------------------------------------------------------------------------------------ 00:06:23.391 0,0 468992/s 1832 MiB/s 0 0 00:06:23.391 ==================================================================================== 00:06:23.391 Total 468992/s 1832 MiB/s 0 0' 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.391 22:48:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.391 22:48:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.391 22:48:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.391 22:48:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.391 22:48:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.391 22:48:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.391 22:48:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.391 22:48:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.391 22:48:51 -- accel/accel.sh@42 -- # jq -r . 00:06:23.391 [2024-06-09 22:48:51.196665] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
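Note: accel_fill is invoked with -f 128 and the configuration dump above reports 'Fill pattern: 0x80', the same byte printed in hex (128 = 0x80); it is also the only case so far run with -q 64 -a 64, hence 'Queue depth: 64' instead of 32. The decimal-to-hex relationship:

  printf '0x%02x\n' 128   # -> 0x80, the fill byte reported in the configuration above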
00:06:23.391 [2024-06-09 22:48:51.196754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894527 ] 00:06:23.391 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.391 [2024-06-09 22:48:51.257239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.391 [2024-06-09 22:48:51.320272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=0x1 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=fill 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=0x80 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=software 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=64 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=64 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- 
accel/accel.sh@21 -- # val=1 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val=Yes 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:23.391 22:48:51 -- accel/accel.sh@21 -- # val= 00:06:23.391 22:48:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # IFS=: 00:06:23.391 22:48:51 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@21 -- # val= 00:06:24.334 22:48:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # IFS=: 00:06:24.334 22:48:52 -- accel/accel.sh@20 -- # read -r var val 00:06:24.334 22:48:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.334 22:48:52 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:24.334 22:48:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.334 00:06:24.334 real 0m2.560s 00:06:24.334 user 0m2.370s 00:06:24.334 sys 0m0.196s 00:06:24.334 22:48:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.334 22:48:52 -- common/autotest_common.sh@10 -- # set +x 00:06:24.334 ************************************ 00:06:24.334 END TEST accel_fill 00:06:24.334 ************************************ 00:06:24.334 22:48:52 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:24.334 22:48:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:24.334 22:48:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.334 22:48:52 -- common/autotest_common.sh@10 -- # set +x 00:06:24.334 ************************************ 00:06:24.334 START TEST 
accel_copy_crc32c 00:06:24.334 ************************************ 00:06:24.334 22:48:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:24.334 22:48:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.334 22:48:52 -- accel/accel.sh@17 -- # local accel_module 00:06:24.334 22:48:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:24.334 22:48:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:24.334 22:48:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.334 22:48:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.334 22:48:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.334 22:48:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.334 22:48:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.334 22:48:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.334 22:48:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.334 22:48:52 -- accel/accel.sh@42 -- # jq -r . 00:06:24.595 [2024-06-09 22:48:52.519475] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:24.595 [2024-06-09 22:48:52.519547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3894884 ] 00:06:24.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.595 [2024-06-09 22:48:52.579417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.595 [2024-06-09 22:48:52.642221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.981 22:48:53 -- accel/accel.sh@18 -- # out=' 00:06:25.981 SPDK Configuration: 00:06:25.981 Core mask: 0x1 00:06:25.981 00:06:25.981 Accel Perf Configuration: 00:06:25.981 Workload Type: copy_crc32c 00:06:25.981 CRC-32C seed: 0 00:06:25.981 Vector size: 4096 bytes 00:06:25.981 Transfer size: 4096 bytes 00:06:25.981 Vector count 1 00:06:25.981 Module: software 00:06:25.981 Queue depth: 32 00:06:25.981 Allocate depth: 32 00:06:25.981 # threads/core: 1 00:06:25.981 Run time: 1 seconds 00:06:25.981 Verify: Yes 00:06:25.981 00:06:25.981 Running for 1 seconds... 00:06:25.981 00:06:25.981 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.981 ------------------------------------------------------------------------------------ 00:06:25.981 0,0 248480/s 970 MiB/s 0 0 00:06:25.981 ==================================================================================== 00:06:25.981 Total 248480/s 970 MiB/s 0 0' 00:06:25.981 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.981 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.981 22:48:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:25.981 22:48:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:25.981 22:48:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.981 22:48:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.981 22:48:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.981 22:48:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.981 22:48:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.981 22:48:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.981 22:48:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.981 22:48:53 -- accel/accel.sh@42 -- # jq -r . 
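
The first copy_crc32c pass above reports 248480 transfers/s of 4096-byte buffers, which the tool rounds to 970 MiB/s; the conversion is simply transfers/s times transfer size divided by 2^20 and can be checked in one line:

# 248480 transfers/s * 4096 bytes / 1048576 bytes-per-MiB ~= 970.6 MiB/s, matching the table above
awk 'BEGIN { printf "%.1f MiB/s\n", 248480 * 4096 / 1048576 }'
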
00:06:25.981 [2024-06-09 22:48:53.794323] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:25.981 [2024-06-09 22:48:53.794434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895221 ] 00:06:25.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.981 [2024-06-09 22:48:53.854523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.981 [2024-06-09 22:48:53.915759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.981 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=0x1 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=0 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=software 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=32 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 
00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=32 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=1 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val=Yes 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:25.982 22:48:53 -- accel/accel.sh@21 -- # val= 00:06:25.982 22:48:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # IFS=: 00:06:25.982 22:48:53 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@21 -- # val= 00:06:26.926 22:48:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # IFS=: 00:06:26.926 22:48:55 -- accel/accel.sh@20 -- # read -r var val 00:06:26.926 22:48:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.926 22:48:55 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:26.926 22:48:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.926 00:06:26.926 real 0m2.552s 00:06:26.926 user 0m2.362s 00:06:26.926 sys 0m0.197s 00:06:26.926 22:48:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.926 22:48:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 ************************************ 00:06:26.926 END TEST accel_copy_crc32c 00:06:26.926 ************************************ 00:06:26.926 
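
Each TEST block in this log, most recently accel_copy_crc32c just above, is produced by the run_test helper referenced in the trace as common/autotest_common.sh: it prints the START/END banners, times the test body, and reports the real/user/sys figures seen here. The following is only a rough stand-in that mimics the observable output pattern, not SPDK's actual implementation:

# Approximation of the banner-and-timing pattern visible in this log.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # the real/user/sys lines above come from timing the body
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# Usage (mirroring the invocation logged above):
# run_test_sketch accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
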
22:48:55 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:26.926 22:48:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:26.926 22:48:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.926 22:48:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.926 ************************************ 00:06:26.926 START TEST accel_copy_crc32c_C2 00:06:26.926 ************************************ 00:06:26.926 22:48:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:26.926 22:48:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.926 22:48:55 -- accel/accel.sh@17 -- # local accel_module 00:06:26.926 22:48:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:26.926 22:48:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:26.926 22:48:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.926 22:48:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.926 22:48:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.926 22:48:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.926 22:48:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.926 22:48:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.926 22:48:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.926 22:48:55 -- accel/accel.sh@42 -- # jq -r . 00:06:27.188 [2024-06-09 22:48:55.118480] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:27.188 [2024-06-09 22:48:55.118589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895461 ] 00:06:27.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.188 [2024-06-09 22:48:55.180163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.188 [2024-06-09 22:48:55.245488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.576 22:48:56 -- accel/accel.sh@18 -- # out=' 00:06:28.576 SPDK Configuration: 00:06:28.576 Core mask: 0x1 00:06:28.576 00:06:28.576 Accel Perf Configuration: 00:06:28.576 Workload Type: copy_crc32c 00:06:28.576 CRC-32C seed: 0 00:06:28.576 Vector size: 4096 bytes 00:06:28.576 Transfer size: 8192 bytes 00:06:28.576 Vector count 2 00:06:28.576 Module: software 00:06:28.576 Queue depth: 32 00:06:28.576 Allocate depth: 32 00:06:28.576 # threads/core: 1 00:06:28.576 Run time: 1 seconds 00:06:28.576 Verify: Yes 00:06:28.576 00:06:28.576 Running for 1 seconds... 
00:06:28.576 00:06:28.576 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.576 ------------------------------------------------------------------------------------ 00:06:28.576 0,0 184992/s 1445 MiB/s 0 0 00:06:28.576 ==================================================================================== 00:06:28.576 Total 184992/s 722 MiB/s 0 0' 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:28.576 22:48:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:28.576 22:48:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.576 22:48:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.576 22:48:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.576 22:48:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.576 22:48:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.576 22:48:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.576 22:48:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.576 22:48:56 -- accel/accel.sh@42 -- # jq -r . 00:06:28.576 [2024-06-09 22:48:56.396735] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:28.576 [2024-06-09 22:48:56.396807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895609 ] 00:06:28.576 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.576 [2024-06-09 22:48:56.457633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.576 [2024-06-09 22:48:56.520215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=0x1 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=0 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 
00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=software 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=32 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=32 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=1 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val=Yes 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:28.576 22:48:56 -- accel/accel.sh@21 -- # val= 00:06:28.576 22:48:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # IFS=: 00:06:28.576 22:48:56 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@21 -- # val= 00:06:29.520 22:48:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # IFS=: 00:06:29.520 22:48:57 -- accel/accel.sh@20 -- # read -r var val 00:06:29.520 22:48:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.520 22:48:57 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:29.520 22:48:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.520 00:06:29.520 real 0m2.560s 00:06:29.520 user 0m2.375s 00:06:29.520 sys 0m0.193s 00:06:29.520 22:48:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.520 22:48:57 -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 ************************************ 00:06:29.520 END TEST accel_copy_crc32c_C2 00:06:29.520 ************************************ 00:06:29.520 22:48:57 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:29.520 22:48:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:29.520 22:48:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.520 22:48:57 -- common/autotest_common.sh@10 -- # set +x 00:06:29.520 ************************************ 00:06:29.520 START TEST accel_dualcast 00:06:29.520 ************************************ 00:06:29.520 22:48:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:29.520 22:48:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.520 22:48:57 -- accel/accel.sh@17 -- # local accel_module 00:06:29.520 22:48:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:29.520 22:48:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:29.520 22:48:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.520 22:48:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.520 22:48:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.520 22:48:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.520 22:48:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.520 22:48:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.520 22:48:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.782 22:48:57 -- accel/accel.sh@42 -- # jq -r . 00:06:29.782 [2024-06-09 22:48:57.719126] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
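
The accel_copy_crc32c_C2 result above differs from the earlier copy_crc32c run only by the -C 2 flag, which the configuration block reports as "Vector count 2" with the per-operation transfer size doubling to 8192 bytes. A stand-alone reproduction, under the same assumption that the examples are built at this job's SPDK path:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -C 2: two 4096-byte source vectors per operation ("Transfer size: 8192 bytes" above)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2
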
00:06:29.782 [2024-06-09 22:48:57.719217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895940 ] 00:06:29.782 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.782 [2024-06-09 22:48:57.788001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.782 [2024-06-09 22:48:57.851370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.170 22:48:58 -- accel/accel.sh@18 -- # out=' 00:06:31.170 SPDK Configuration: 00:06:31.170 Core mask: 0x1 00:06:31.170 00:06:31.170 Accel Perf Configuration: 00:06:31.170 Workload Type: dualcast 00:06:31.170 Transfer size: 4096 bytes 00:06:31.170 Vector count 1 00:06:31.170 Module: software 00:06:31.170 Queue depth: 32 00:06:31.170 Allocate depth: 32 00:06:31.170 # threads/core: 1 00:06:31.170 Run time: 1 seconds 00:06:31.170 Verify: Yes 00:06:31.170 00:06:31.170 Running for 1 seconds... 00:06:31.170 00:06:31.170 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.170 ------------------------------------------------------------------------------------ 00:06:31.170 0,0 361952/s 1413 MiB/s 0 0 00:06:31.170 ==================================================================================== 00:06:31.170 Total 361952/s 1413 MiB/s 0 0' 00:06:31.170 22:48:58 -- accel/accel.sh@20 -- # IFS=: 00:06:31.170 22:48:58 -- accel/accel.sh@20 -- # read -r var val 00:06:31.170 22:48:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:31.170 22:48:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:31.170 22:48:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.170 22:48:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.170 22:48:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.170 22:48:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.170 22:48:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.171 22:48:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.171 22:48:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.171 22:48:58 -- accel/accel.sh@42 -- # jq -r . 00:06:31.171 [2024-06-09 22:48:59.003506] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
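
Each accel_perf start-up in this section logs "EAL: No free 2048 kB hugepages reported on node 1"; the runs clearly proceed past it, so for these software-path tests it reads as informational rather than fatal. If the message needs chasing, the hugepage pools can be inspected directly; these are standard Linux paths, not part of the harness:

grep -i huge /proc/meminfo                                                          # system-wide hugepage counters
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages    # per-node 2 MiB pools
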
00:06:31.171 [2024-06-09 22:48:59.003607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896276 ] 00:06:31.171 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.171 [2024-06-09 22:48:59.064495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.171 [2024-06-09 22:48:59.127265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=0x1 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=dualcast 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=software 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=32 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=32 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=1 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val=Yes 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:31.171 22:48:59 -- accel/accel.sh@21 -- # val= 00:06:31.171 22:48:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # IFS=: 00:06:31.171 22:48:59 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@21 -- # val= 00:06:32.116 22:49:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # IFS=: 00:06:32.116 22:49:00 -- accel/accel.sh@20 -- # read -r var val 00:06:32.116 22:49:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.116 22:49:00 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:32.116 22:49:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.116 00:06:32.116 real 0m2.564s 00:06:32.116 user 0m2.361s 00:06:32.116 sys 0m0.208s 00:06:32.117 22:49:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.117 22:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:32.117 ************************************ 00:06:32.117 END TEST accel_dualcast 00:06:32.117 ************************************ 00:06:32.117 22:49:00 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:32.117 22:49:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:32.117 22:49:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.117 22:49:00 -- common/autotest_common.sh@10 -- # set +x 00:06:32.378 ************************************ 00:06:32.378 START TEST accel_compare 00:06:32.378 ************************************ 00:06:32.378 22:49:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:32.378 22:49:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.378 22:49:00 
-- accel/accel.sh@17 -- # local accel_module 00:06:32.378 22:49:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:32.378 22:49:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:32.378 22:49:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.378 22:49:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.378 22:49:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.378 22:49:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.378 22:49:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.378 22:49:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.378 22:49:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.378 22:49:00 -- accel/accel.sh@42 -- # jq -r . 00:06:32.378 [2024-06-09 22:49:00.328074] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:32.378 [2024-06-09 22:49:00.328185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896606 ] 00:06:32.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.378 [2024-06-09 22:49:00.398706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.378 [2024-06-09 22:49:00.463693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.767 22:49:01 -- accel/accel.sh@18 -- # out=' 00:06:33.767 SPDK Configuration: 00:06:33.767 Core mask: 0x1 00:06:33.767 00:06:33.767 Accel Perf Configuration: 00:06:33.767 Workload Type: compare 00:06:33.767 Transfer size: 4096 bytes 00:06:33.767 Vector count 1 00:06:33.767 Module: software 00:06:33.767 Queue depth: 32 00:06:33.767 Allocate depth: 32 00:06:33.767 # threads/core: 1 00:06:33.767 Run time: 1 seconds 00:06:33.767 Verify: Yes 00:06:33.767 00:06:33.767 Running for 1 seconds... 00:06:33.767 00:06:33.767 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:33.767 ------------------------------------------------------------------------------------ 00:06:33.767 0,0 435616/s 1701 MiB/s 0 0 00:06:33.767 ==================================================================================== 00:06:33.767 Total 435616/s 1701 MiB/s 0 0' 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:33.767 22:49:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:33.767 22:49:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.767 22:49:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.767 22:49:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.767 22:49:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.767 22:49:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.767 22:49:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.767 22:49:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.767 22:49:01 -- accel/accel.sh@42 -- # jq -r . 00:06:33.767 [2024-06-09 22:49:01.614857] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
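
The compare run above reports its headline number on the "Total" row (435616 transfers/s, 1701 MiB/s). When accel_perf output is captured to a file, that row is easy to pull out for a crude regression gate; the accel_perf.log file name and the 400000/s floor below are illustrative only and not something this harness does:

# Hypothetical post-processing of a captured run; only the "Total <N>/s ..." row format
# is taken from the output above.
total=$(awk '$1 == "Total" { sub(/\/s$/, "", $2); print $2; exit }' accel_perf.log)
if [ "${total:-0}" -lt 400000 ]; then
    echo "compare throughput below expected floor: ${total:-0} transfers/s" >&2
    exit 1
fi
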
00:06:33.767 [2024-06-09 22:49:01.614926] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896748 ] 00:06:33.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.767 [2024-06-09 22:49:01.673865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.767 [2024-06-09 22:49:01.735978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=0x1 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=compare 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=software 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=32 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=32 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=1 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val=Yes 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:33.767 22:49:01 -- accel/accel.sh@21 -- # val= 00:06:33.767 22:49:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # IFS=: 00:06:33.767 22:49:01 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@21 -- # val= 00:06:34.709 22:49:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # IFS=: 00:06:34.709 22:49:02 -- accel/accel.sh@20 -- # read -r var val 00:06:34.709 22:49:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:34.709 22:49:02 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:34.709 22:49:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.710 00:06:34.710 real 0m2.565s 00:06:34.710 user 0m2.368s 00:06:34.710 sys 0m0.203s 00:06:34.710 22:49:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.710 22:49:02 -- common/autotest_common.sh@10 -- # set +x 00:06:34.710 ************************************ 00:06:34.710 END TEST accel_compare 00:06:34.710 ************************************ 00:06:34.971 22:49:02 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:34.971 22:49:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:34.971 22:49:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.971 22:49:02 -- common/autotest_common.sh@10 -- # set +x 00:06:34.971 ************************************ 00:06:34.971 START TEST accel_xor 00:06:34.971 ************************************ 00:06:34.971 22:49:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:34.971 22:49:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.971 22:49:02 -- accel/accel.sh@17 
-- # local accel_module 00:06:34.971 22:49:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:34.971 22:49:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:34.971 22:49:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.971 22:49:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.971 22:49:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.971 22:49:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.971 22:49:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.971 22:49:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.971 22:49:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.971 22:49:02 -- accel/accel.sh@42 -- # jq -r . 00:06:34.971 [2024-06-09 22:49:02.936159] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:34.971 [2024-06-09 22:49:02.936274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897002 ] 00:06:34.971 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.971 [2024-06-09 22:49:03.006116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.971 [2024-06-09 22:49:03.070863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.357 22:49:04 -- accel/accel.sh@18 -- # out=' 00:06:36.357 SPDK Configuration: 00:06:36.357 Core mask: 0x1 00:06:36.357 00:06:36.357 Accel Perf Configuration: 00:06:36.357 Workload Type: xor 00:06:36.357 Source buffers: 2 00:06:36.357 Transfer size: 4096 bytes 00:06:36.357 Vector count 1 00:06:36.357 Module: software 00:06:36.357 Queue depth: 32 00:06:36.357 Allocate depth: 32 00:06:36.357 # threads/core: 1 00:06:36.357 Run time: 1 seconds 00:06:36.357 Verify: Yes 00:06:36.357 00:06:36.357 Running for 1 seconds... 00:06:36.357 00:06:36.357 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.357 ------------------------------------------------------------------------------------ 00:06:36.357 0,0 360448/s 1408 MiB/s 0 0 00:06:36.357 ==================================================================================== 00:06:36.357 Total 360448/s 1408 MiB/s 0 0' 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:36.357 22:49:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:36.357 22:49:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.357 22:49:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.357 22:49:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.357 22:49:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.357 22:49:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.357 22:49:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.357 22:49:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.357 22:49:04 -- accel/accel.sh@42 -- # jq -r . 00:06:36.357 [2024-06-09 22:49:04.221186] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
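
The xor case above follows the same pattern as the copy, fill, copy_crc32c, dualcast, and compare cases earlier in this section: one accel_perf invocation per workload, software module, one second, verify on. A compact sweep over those workloads, assuming the same build path as before; this is a convenience sketch, not what accel.sh itself does:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for wl in copy fill copy_crc32c dualcast compare xor; do
    extra=""
    # the fill case in this log also pins pattern and depths: -f 128 -q 64 -a 64
    [ "$wl" = fill ] && extra="-f 128 -q 64 -a 64"
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y $extra
done
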
00:06:36.357 [2024-06-09 22:49:04.221261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897336 ] 00:06:36.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.357 [2024-06-09 22:49:04.280903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.357 [2024-06-09 22:49:04.343503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=0x1 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=xor 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=2 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=software 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=32 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=32 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- 
accel/accel.sh@21 -- # val=1 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.357 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.357 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.357 22:49:04 -- accel/accel.sh@21 -- # val=Yes 00:06:36.358 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.358 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.358 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:36.358 22:49:04 -- accel/accel.sh@21 -- # val= 00:06:36.358 22:49:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # IFS=: 00:06:36.358 22:49:04 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@21 -- # val= 00:06:37.303 22:49:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # IFS=: 00:06:37.303 22:49:05 -- accel/accel.sh@20 -- # read -r var val 00:06:37.303 22:49:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:37.303 22:49:05 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:37.303 22:49:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.303 00:06:37.303 real 0m2.565s 00:06:37.303 user 0m2.375s 00:06:37.303 sys 0m0.195s 00:06:37.303 22:49:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.303 22:49:05 -- common/autotest_common.sh@10 -- # set +x 00:06:37.303 ************************************ 00:06:37.303 END TEST accel_xor 00:06:37.303 ************************************ 00:06:37.599 22:49:05 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:37.599 22:49:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:37.599 22:49:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.599 22:49:05 -- common/autotest_common.sh@10 -- # set +x 00:06:37.599 ************************************ 00:06:37.599 START TEST accel_xor 
00:06:37.599 ************************************ 00:06:37.599 22:49:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:37.599 22:49:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.599 22:49:05 -- accel/accel.sh@17 -- # local accel_module 00:06:37.599 22:49:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:37.599 22:49:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:37.599 22:49:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.599 22:49:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.599 22:49:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.599 22:49:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.599 22:49:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.599 22:49:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.599 22:49:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.599 22:49:05 -- accel/accel.sh@42 -- # jq -r . 00:06:37.599 [2024-06-09 22:49:05.542676] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:37.599 [2024-06-09 22:49:05.542767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897689 ] 00:06:37.599 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.599 [2024-06-09 22:49:05.604793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.599 [2024-06-09 22:49:05.669583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.987 22:49:06 -- accel/accel.sh@18 -- # out=' 00:06:38.987 SPDK Configuration: 00:06:38.987 Core mask: 0x1 00:06:38.987 00:06:38.987 Accel Perf Configuration: 00:06:38.987 Workload Type: xor 00:06:38.987 Source buffers: 3 00:06:38.987 Transfer size: 4096 bytes 00:06:38.987 Vector count 1 00:06:38.987 Module: software 00:06:38.987 Queue depth: 32 00:06:38.987 Allocate depth: 32 00:06:38.987 # threads/core: 1 00:06:38.987 Run time: 1 seconds 00:06:38.987 Verify: Yes 00:06:38.987 00:06:38.987 Running for 1 seconds... 00:06:38.987 00:06:38.987 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:38.987 ------------------------------------------------------------------------------------ 00:06:38.987 0,0 344480/s 1345 MiB/s 0 0 00:06:38.987 ==================================================================================== 00:06:38.987 Total 344480/s 1345 MiB/s 0 0' 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:38.987 22:49:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:38.987 22:49:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.987 22:49:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.987 22:49:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.987 22:49:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.987 22:49:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.987 22:49:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.987 22:49:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.987 22:49:06 -- accel/accel.sh@42 -- # jq -r . 00:06:38.987 [2024-06-09 22:49:06.819799] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
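The -x 3 variant above runs the same xor workload with three source buffers instead of two: conceptually, the software module writes a destination buffer that is the byte-wise XOR of all source buffers, and the -y flag asks the tool to verify the result. The sketch below is plain Python written for illustration, not SPDK code; the buffer names and the final re-XOR check are this sketch's own choices:

    import os
    from functools import reduce

    # Byte-wise XOR of three 4096-byte source buffers into one destination,
    # mirroring "Workload Type: xor", "Source buffers: 3", "Transfer size: 4096 bytes".
    transfer_size = 4096
    sources = [os.urandom(transfer_size) for _ in range(3)]

    dest = bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*sources))

    # A verification in the spirit of -y: XOR-ing the destination with two of the
    # sources must reproduce the remaining one.
    recovered = bytes(d ^ s0 ^ s1 for d, s0, s1 in zip(dest, sources[0], sources[1]))
    assert recovered == sources[2]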
00:06:38.987 [2024-06-09 22:49:06.819873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897892 ] 00:06:38.987 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.987 [2024-06-09 22:49:06.880151] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.987 [2024-06-09 22:49:06.941797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val=0x1 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val=xor 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.987 22:49:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.987 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.987 22:49:06 -- accel/accel.sh@21 -- # val=3 00:06:38.987 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val=software 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val=32 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val=32 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- 
accel/accel.sh@21 -- # val=1 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val=Yes 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:38.988 22:49:06 -- accel/accel.sh@21 -- # val= 00:06:38.988 22:49:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # IFS=: 00:06:38.988 22:49:06 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.931 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.931 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.931 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.931 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.931 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.931 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.931 22:49:08 -- accel/accel.sh@21 -- # val= 00:06:39.932 22:49:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.932 22:49:08 -- accel/accel.sh@20 -- # IFS=: 00:06:39.932 22:49:08 -- accel/accel.sh@20 -- # read -r var val 00:06:39.932 22:49:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:39.932 22:49:08 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:39.932 22:49:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.932 00:06:39.932 real 0m2.555s 00:06:39.932 user 0m2.366s 00:06:39.932 sys 0m0.195s 00:06:39.932 22:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.932 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:39.932 ************************************ 00:06:39.932 END TEST accel_xor 00:06:39.932 ************************************ 00:06:39.932 22:49:08 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:39.932 22:49:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:39.932 22:49:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.932 22:49:08 -- common/autotest_common.sh@10 -- # set +x 00:06:40.193 ************************************ 00:06:40.193 START TEST 
accel_dif_verify 00:06:40.193 ************************************ 00:06:40.193 22:49:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:40.193 22:49:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.193 22:49:08 -- accel/accel.sh@17 -- # local accel_module 00:06:40.193 22:49:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:40.193 22:49:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:40.193 22:49:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.193 22:49:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.193 22:49:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.193 22:49:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.193 22:49:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.193 22:49:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.193 22:49:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.193 22:49:08 -- accel/accel.sh@42 -- # jq -r . 00:06:40.193 [2024-06-09 22:49:08.141413] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:40.193 [2024-06-09 22:49:08.141497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898081 ] 00:06:40.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.193 [2024-06-09 22:49:08.203945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.193 [2024-06-09 22:49:08.269488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.581 22:49:09 -- accel/accel.sh@18 -- # out=' 00:06:41.581 SPDK Configuration: 00:06:41.581 Core mask: 0x1 00:06:41.581 00:06:41.581 Accel Perf Configuration: 00:06:41.581 Workload Type: dif_verify 00:06:41.581 Vector size: 4096 bytes 00:06:41.581 Transfer size: 4096 bytes 00:06:41.581 Block size: 512 bytes 00:06:41.581 Metadata size: 8 bytes 00:06:41.581 Vector count 1 00:06:41.581 Module: software 00:06:41.581 Queue depth: 32 00:06:41.581 Allocate depth: 32 00:06:41.581 # threads/core: 1 00:06:41.581 Run time: 1 seconds 00:06:41.581 Verify: No 00:06:41.581 00:06:41.581 Running for 1 seconds... 00:06:41.581 00:06:41.581 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:41.581 ------------------------------------------------------------------------------------ 00:06:41.581 0,0 94944/s 376 MiB/s 0 0 00:06:41.581 ==================================================================================== 00:06:41.581 Total 94944/s 370 MiB/s 0 0' 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:41.581 22:49:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:41.581 22:49:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.581 22:49:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.581 22:49:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.581 22:49:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.581 22:49:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.581 22:49:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.581 22:49:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.581 22:49:09 -- accel/accel.sh@42 -- # jq -r . 
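The dif_verify configuration above (4096-byte transfers, 512-byte blocks, 8 bytes of metadata per block) means each transfer is checked block by block against previously generated protection metadata; the dif_generate test that follows exercises the generation side of the same idea. The sketch below is a loose conceptual model only: it uses a CRC32 guard plus a block index as the 8-byte metadata, which is not the T10 DIF field layout SPDK actually implements, and none of the function names come from SPDK:

    import os
    import struct
    import zlib

    BLOCK_SIZE = 512   # "Block size: 512 bytes"
    META_SIZE = 8      # "Metadata size: 8 bytes"
    TRANSFER = 4096    # "Transfer size: 4096 bytes"

    def generate(data: bytes) -> bytes:
        """Append 8 bytes of per-block metadata (the dif_generate side)."""
        out = bytearray()
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            out += block + struct.pack(">II", zlib.crc32(block), i // BLOCK_SIZE)
        return bytes(out)

    def verify(protected: bytes) -> bool:
        """Recompute and compare each block's metadata (the dif_verify side)."""
        stride = BLOCK_SIZE + META_SIZE
        for n, i in enumerate(range(0, len(protected), stride)):
            block = protected[i:i + BLOCK_SIZE]
            guard, ref = struct.unpack(">II", protected[i + BLOCK_SIZE:i + stride])
            if guard != zlib.crc32(block) or ref != n:
                return False
        return True

    assert verify(generate(os.urandom(TRANSFER)))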
00:06:41.581 [2024-06-09 22:49:09.422053] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:41.581 [2024-06-09 22:49:09.422131] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898400 ] 00:06:41.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.581 [2024-06-09 22:49:09.481864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.581 [2024-06-09 22:49:09.543491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=0x1 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=dif_verify 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=software 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=32 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=32 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=1 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val=No 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:41.581 22:49:09 -- accel/accel.sh@21 -- # val= 00:06:41.581 22:49:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # IFS=: 00:06:41.581 22:49:09 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@21 -- # val= 00:06:42.526 22:49:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # IFS=: 00:06:42.526 22:49:10 -- accel/accel.sh@20 -- # read -r var val 00:06:42.526 22:49:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.526 22:49:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:42.526 22:49:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.526 00:06:42.526 real 0m2.559s 00:06:42.526 user 0m2.372s 00:06:42.526 sys 0m0.195s 00:06:42.526 22:49:10 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.526 22:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:42.526 ************************************ 00:06:42.526 END TEST accel_dif_verify 00:06:42.526 ************************************ 00:06:42.787 22:49:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:42.788 22:49:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:42.788 22:49:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.788 22:49:10 -- common/autotest_common.sh@10 -- # set +x 00:06:42.788 ************************************ 00:06:42.788 START TEST accel_dif_generate 00:06:42.788 ************************************ 00:06:42.788 22:49:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:42.788 22:49:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.788 22:49:10 -- accel/accel.sh@17 -- # local accel_module 00:06:42.788 22:49:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:42.788 22:49:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:42.788 22:49:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.788 22:49:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.788 22:49:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.788 22:49:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.788 22:49:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.788 22:49:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.788 22:49:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.788 22:49:10 -- accel/accel.sh@42 -- # jq -r . 00:06:42.788 [2024-06-09 22:49:10.744776] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:42.788 [2024-06-09 22:49:10.744853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898755 ] 00:06:42.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.788 [2024-06-09 22:49:10.805805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.788 [2024-06-09 22:49:10.871184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.174 22:49:11 -- accel/accel.sh@18 -- # out=' 00:06:44.174 SPDK Configuration: 00:06:44.174 Core mask: 0x1 00:06:44.174 00:06:44.174 Accel Perf Configuration: 00:06:44.174 Workload Type: dif_generate 00:06:44.174 Vector size: 4096 bytes 00:06:44.174 Transfer size: 4096 bytes 00:06:44.174 Block size: 512 bytes 00:06:44.174 Metadata size: 8 bytes 00:06:44.174 Vector count 1 00:06:44.174 Module: software 00:06:44.174 Queue depth: 32 00:06:44.174 Allocate depth: 32 00:06:44.174 # threads/core: 1 00:06:44.174 Run time: 1 seconds 00:06:44.174 Verify: No 00:06:44.174 00:06:44.174 Running for 1 seconds... 
00:06:44.174 00:06:44.174 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.174 ------------------------------------------------------------------------------------ 00:06:44.174 0,0 113056/s 448 MiB/s 0 0 00:06:44.174 ==================================================================================== 00:06:44.174 Total 113056/s 441 MiB/s 0 0' 00:06:44.174 22:49:11 -- accel/accel.sh@20 -- # IFS=: 00:06:44.174 22:49:11 -- accel/accel.sh@20 -- # read -r var val 00:06:44.174 22:49:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:44.174 22:49:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:44.174 22:49:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.174 22:49:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.174 22:49:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.174 22:49:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.174 22:49:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.174 22:49:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.174 22:49:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.174 22:49:11 -- accel/accel.sh@42 -- # jq -r . 00:06:44.174 [2024-06-09 22:49:12.023124] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:44.174 [2024-06-09 22:49:12.023225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899067 ] 00:06:44.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.174 [2024-06-09 22:49:12.083488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.174 [2024-06-09 22:49:12.145722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.174 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.174 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.174 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.174 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.174 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.174 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.174 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.174 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.174 22:49:12 -- accel/accel.sh@21 -- # val=0x1 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=dif_generate 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 
00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=software 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=32 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=32 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=1 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val=No 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:44.175 22:49:12 -- accel/accel.sh@21 -- # val= 00:06:44.175 22:49:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # IFS=: 00:06:44.175 22:49:12 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@21 -- # val= 00:06:45.116 22:49:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # IFS=: 00:06:45.116 22:49:13 -- accel/accel.sh@20 -- # read -r var val 00:06:45.116 22:49:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:45.116 22:49:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:45.116 22:49:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.116 00:06:45.116 real 0m2.558s 00:06:45.116 user 0m2.375s 00:06:45.116 sys 0m0.191s 00:06:45.116 22:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.116 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:45.116 ************************************ 00:06:45.116 END TEST accel_dif_generate 00:06:45.116 ************************************ 00:06:45.376 22:49:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:45.376 22:49:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:45.376 22:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.376 22:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:45.376 ************************************ 00:06:45.376 START TEST accel_dif_generate_copy 00:06:45.376 ************************************ 00:06:45.376 22:49:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:45.376 22:49:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.376 22:49:13 -- accel/accel.sh@17 -- # local accel_module 00:06:45.376 22:49:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:45.376 22:49:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:45.376 22:49:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.376 22:49:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.376 22:49:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.376 22:49:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.376 22:49:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.376 22:49:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.376 22:49:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.376 22:49:13 -- accel/accel.sh@42 -- # jq -r . 00:06:45.376 [2024-06-09 22:49:13.344369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:45.376 [2024-06-09 22:49:13.344452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899243 ] 00:06:45.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.376 [2024-06-09 22:49:13.404840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.376 [2024-06-09 22:49:13.467257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.758 22:49:14 -- accel/accel.sh@18 -- # out=' 00:06:46.758 SPDK Configuration: 00:06:46.758 Core mask: 0x1 00:06:46.758 00:06:46.758 Accel Perf Configuration: 00:06:46.758 Workload Type: dif_generate_copy 00:06:46.758 Vector size: 4096 bytes 00:06:46.758 Transfer size: 4096 bytes 00:06:46.758 Vector count 1 00:06:46.758 Module: software 00:06:46.758 Queue depth: 32 00:06:46.758 Allocate depth: 32 00:06:46.758 # threads/core: 1 00:06:46.758 Run time: 1 seconds 00:06:46.758 Verify: No 00:06:46.758 00:06:46.758 Running for 1 seconds... 00:06:46.758 00:06:46.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.758 ------------------------------------------------------------------------------------ 00:06:46.758 0,0 87744/s 348 MiB/s 0 0 00:06:46.758 ==================================================================================== 00:06:46.758 Total 87744/s 342 MiB/s 0 0' 00:06:46.758 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:46.759 22:49:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:46.759 22:49:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.759 22:49:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.759 22:49:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.759 22:49:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.759 22:49:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.759 22:49:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.759 22:49:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.759 22:49:14 -- accel/accel.sh@42 -- # jq -r . 00:06:46.759 [2024-06-09 22:49:14.618937] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
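dif_generate_copy, whose results appear just above, reads as a copy combined with the metadata generation step: plain data blocks are taken from a source buffer and written, together with freshly computed metadata, into a separate destination buffer in one pass. That reading is inferred from the workload name rather than stated in the log, so treat the following Python sketch as illustrative only (same simplified metadata as the earlier DIF sketch, not SPDK's format):

    import os
    import struct
    import zlib

    BLOCK, META = 512, 8

    def generate_copy(src: bytes) -> bytes:
        """Copy 512-byte blocks and interleave 8 bytes of metadata per block."""
        dst = bytearray()
        for i in range(0, len(src), BLOCK):
            block = src[i:i + BLOCK]
            dst += block + struct.pack(">II", zlib.crc32(block), i // BLOCK)
        return bytes(dst)

    src = os.urandom(4096)                               # "Transfer size: 4096 bytes"
    dst = generate_copy(src)
    assert len(dst) == 4096 + (4096 // BLOCK) * META     # 4096 data bytes plus 64 metadata bytes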
00:06:46.759 [2024-06-09 22:49:14.619013] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899462 ] 00:06:46.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.759 [2024-06-09 22:49:14.678939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.759 [2024-06-09 22:49:14.740678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=0x1 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=software 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=32 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=32 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r 
var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=1 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val=No 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:46.759 22:49:14 -- accel/accel.sh@21 -- # val= 00:06:46.759 22:49:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # IFS=: 00:06:46.759 22:49:14 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@21 -- # val= 00:06:47.700 22:49:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # IFS=: 00:06:47.700 22:49:15 -- accel/accel.sh@20 -- # read -r var val 00:06:47.700 22:49:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.700 22:49:15 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:47.700 22:49:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.700 00:06:47.700 real 0m2.552s 00:06:47.700 user 0m2.363s 00:06:47.700 sys 0m0.195s 00:06:47.700 22:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.700 22:49:15 -- common/autotest_common.sh@10 -- # set +x 00:06:47.700 ************************************ 00:06:47.700 END TEST accel_dif_generate_copy 00:06:47.700 ************************************ 00:06:47.961 22:49:15 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:47.961 22:49:15 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.961 22:49:15 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:47.961 22:49:15 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.961 22:49:15 -- common/autotest_common.sh@10 -- # set +x 00:06:47.961 ************************************ 00:06:47.961 START TEST accel_comp 00:06:47.961 ************************************ 00:06:47.961 22:49:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.961 22:49:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.961 22:49:15 -- accel/accel.sh@17 -- # local accel_module 00:06:47.961 22:49:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.961 22:49:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:47.961 22:49:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.961 22:49:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.961 22:49:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.961 22:49:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.961 22:49:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.961 22:49:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.961 22:49:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.961 22:49:15 -- accel/accel.sh@42 -- # jq -r . 00:06:47.961 [2024-06-09 22:49:15.942221] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:47.961 [2024-06-09 22:49:15.942321] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899817 ] 00:06:47.961 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.961 [2024-06-09 22:49:16.003817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.961 [2024-06-09 22:49:16.067048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.346 22:49:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:49.346 00:06:49.346 SPDK Configuration: 00:06:49.346 Core mask: 0x1 00:06:49.346 00:06:49.346 Accel Perf Configuration: 00:06:49.346 Workload Type: compress 00:06:49.346 Transfer size: 4096 bytes 00:06:49.346 Vector count 1 00:06:49.346 Module: software 00:06:49.346 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.346 Queue depth: 32 00:06:49.346 Allocate depth: 32 00:06:49.346 # threads/core: 1 00:06:49.346 Run time: 1 seconds 00:06:49.346 Verify: No 00:06:49.346 00:06:49.346 Running for 1 seconds... 
00:06:49.346 00:06:49.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:49.346 ------------------------------------------------------------------------------------ 00:06:49.346 0,0 47040/s 196 MiB/s 0 0 00:06:49.346 ==================================================================================== 00:06:49.346 Total 47040/s 183 MiB/s 0 0' 00:06:49.346 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.346 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.347 22:49:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.347 22:49:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.347 22:49:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.347 22:49:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.347 22:49:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.347 22:49:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.347 22:49:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.347 22:49:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.347 22:49:17 -- accel/accel.sh@42 -- # jq -r . 00:06:49.347 [2024-06-09 22:49:17.222551] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:49.347 [2024-06-09 22:49:17.222651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900153 ] 00:06:49.347 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.347 [2024-06-09 22:49:17.283062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.347 [2024-06-09 22:49:17.344568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=0x1 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=compress 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 
22:49:17 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=software 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=32 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=32 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=1 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val=No 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:49.347 22:49:17 -- accel/accel.sh@21 -- # val= 00:06:49.347 22:49:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # IFS=: 00:06:49.347 22:49:17 -- accel/accel.sh@20 -- # read -r var val 00:06:50.735 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.735 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.735 22:49:18 -- accel/accel.sh@20 -- # IFS=: 00:06:50.735 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.735 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.735 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.735 22:49:18 -- accel/accel.sh@20 -- # IFS=: 00:06:50.735 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.735 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.735 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # 
IFS=: 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.736 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.736 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # IFS=: 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.736 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.736 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # IFS=: 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.736 22:49:18 -- accel/accel.sh@21 -- # val= 00:06:50.736 22:49:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # IFS=: 00:06:50.736 22:49:18 -- accel/accel.sh@20 -- # read -r var val 00:06:50.736 22:49:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.736 22:49:18 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:50.736 22:49:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.736 00:06:50.736 real 0m2.564s 00:06:50.736 user 0m2.372s 00:06:50.736 sys 0m0.199s 00:06:50.736 22:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.736 22:49:18 -- common/autotest_common.sh@10 -- # set +x 00:06:50.736 ************************************ 00:06:50.736 END TEST accel_comp 00:06:50.736 ************************************ 00:06:50.736 22:49:18 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.736 22:49:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:50.736 22:49:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.736 22:49:18 -- common/autotest_common.sh@10 -- # set +x 00:06:50.736 ************************************ 00:06:50.736 START TEST accel_decomp 00:06:50.736 ************************************ 00:06:50.736 22:49:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.736 22:49:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.736 22:49:18 -- accel/accel.sh@17 -- # local accel_module 00:06:50.736 22:49:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.736 22:49:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.736 22:49:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.736 22:49:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.736 22:49:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.736 22:49:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.736 22:49:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.736 22:49:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.736 22:49:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.736 22:49:18 -- accel/accel.sh@42 -- # jq -r . 00:06:50.736 [2024-06-09 22:49:18.547719] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:50.736 [2024-06-09 22:49:18.547818] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900383 ] 00:06:50.736 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.736 [2024-06-09 22:49:18.623161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.736 [2024-06-09 22:49:18.689550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.681 22:49:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:51.681 00:06:51.681 SPDK Configuration: 00:06:51.681 Core mask: 0x1 00:06:51.681 00:06:51.681 Accel Perf Configuration: 00:06:51.681 Workload Type: decompress 00:06:51.681 Transfer size: 4096 bytes 00:06:51.681 Vector count 1 00:06:51.681 Module: software 00:06:51.681 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.681 Queue depth: 32 00:06:51.681 Allocate depth: 32 00:06:51.681 # threads/core: 1 00:06:51.681 Run time: 1 seconds 00:06:51.681 Verify: Yes 00:06:51.681 00:06:51.681 Running for 1 seconds... 00:06:51.681 00:06:51.681 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.681 ------------------------------------------------------------------------------------ 00:06:51.681 0,0 63296/s 116 MiB/s 0 0 00:06:51.681 ==================================================================================== 00:06:51.681 Total 63296/s 247 MiB/s 0 0' 00:06:51.681 22:49:19 -- accel/accel.sh@20 -- # IFS=: 00:06:51.681 22:49:19 -- accel/accel.sh@20 -- # read -r var val 00:06:51.681 22:49:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.681 22:49:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.681 22:49:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.681 22:49:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.681 22:49:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.681 22:49:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.681 22:49:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.681 22:49:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.681 22:49:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.681 22:49:19 -- accel/accel.sh@42 -- # jq -r . 00:06:51.681 [2024-06-09 22:49:19.844056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
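For reference (this check is not part of the captured console output): the accel_decomp totals above are consistent with transfers/s multiplied by the 4096-byte transfer size listed in the SPDK Configuration block, taking MiB/s as bytes/s divided by 2^20:

  $ echo $(( 63296 * 4096 / 1048576 ))
  247

which matches the "Total 63296/s 247 MiB/s" row reported by accel_perf for this run.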
00:06:51.681 [2024-06-09 22:49:19.844126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900547 ] 00:06:51.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.943 [2024-06-09 22:49:19.903952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.943 [2024-06-09 22:49:19.966607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.943 22:49:19 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=0x1 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=decompress 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=software 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=32 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 
-- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=32 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=1 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val=Yes 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:51.943 22:49:20 -- accel/accel.sh@21 -- # val= 00:06:51.943 22:49:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # IFS=: 00:06:51.943 22:49:20 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@21 -- # val= 00:06:53.331 22:49:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # IFS=: 00:06:53.331 22:49:21 -- accel/accel.sh@20 -- # read -r var val 00:06:53.331 22:49:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.331 22:49:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:53.331 22:49:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.331 00:06:53.331 real 0m2.580s 00:06:53.331 user 0m2.374s 00:06:53.331 sys 0m0.213s 00:06:53.331 22:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.331 22:49:21 -- common/autotest_common.sh@10 -- # set +x 00:06:53.331 ************************************ 00:06:53.331 END TEST accel_decomp 00:06:53.331 ************************************ 00:06:53.331 22:49:21 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.331 22:49:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:53.331 22:49:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.331 22:49:21 -- common/autotest_common.sh@10 -- # set +x 00:06:53.331 ************************************ 00:06:53.331 START TEST accel_decmop_full 00:06:53.331 ************************************ 00:06:53.331 22:49:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.331 22:49:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.331 22:49:21 -- accel/accel.sh@17 -- # local accel_module 00:06:53.331 22:49:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.331 22:49:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.331 22:49:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.331 22:49:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.331 22:49:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.331 22:49:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.331 22:49:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.331 22:49:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.331 22:49:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.331 22:49:21 -- accel/accel.sh@42 -- # jq -r . 00:06:53.331 [2024-06-09 22:49:21.170151] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:53.331 [2024-06-09 22:49:21.170240] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900878 ] 00:06:53.331 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.331 [2024-06-09 22:49:21.240947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.331 [2024-06-09 22:49:21.307526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.275 22:49:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:54.275 00:06:54.275 SPDK Configuration: 00:06:54.275 Core mask: 0x1 00:06:54.275 00:06:54.275 Accel Perf Configuration: 00:06:54.275 Workload Type: decompress 00:06:54.275 Transfer size: 111250 bytes 00:06:54.275 Vector count 1 00:06:54.275 Module: software 00:06:54.275 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.275 Queue depth: 32 00:06:54.275 Allocate depth: 32 00:06:54.275 # threads/core: 1 00:06:54.275 Run time: 1 seconds 00:06:54.275 Verify: Yes 00:06:54.275 00:06:54.275 Running for 1 seconds... 
00:06:54.275 00:06:54.275 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.275 ------------------------------------------------------------------------------------ 00:06:54.275 0,0 4064/s 167 MiB/s 0 0 00:06:54.275 ==================================================================================== 00:06:54.275 Total 4064/s 431 MiB/s 0 0' 00:06:54.275 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.275 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.275 22:49:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.275 22:49:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:54.275 22:49:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.275 22:49:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.275 22:49:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.275 22:49:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.275 22:49:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.275 22:49:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.275 22:49:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.275 22:49:22 -- accel/accel.sh@42 -- # jq -r . 00:06:54.537 [2024-06-09 22:49:22.471628] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:06:54.537 [2024-06-09 22:49:22.471728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901212 ] 00:06:54.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.537 [2024-06-09 22:49:22.532672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.537 [2024-06-09 22:49:22.595011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=0x1 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=decompress 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:54.537 22:49:22 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=software 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=32 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=32 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=1 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val=Yes 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:54.537 22:49:22 -- accel/accel.sh@21 -- # val= 00:06:54.537 22:49:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # IFS=: 00:06:54.537 22:49:22 -- accel/accel.sh@20 -- # read -r var val 00:06:55.930 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.930 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.930 22:49:23 -- accel/accel.sh@20 -- # IFS=: 00:06:55.930 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.930 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.930 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.930 22:49:23 -- accel/accel.sh@20 -- # IFS=: 00:06:55.930 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.930 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.930 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.930 22:49:23 -- 
accel/accel.sh@20 -- # IFS=: 00:06:55.930 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.930 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.930 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # IFS=: 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.931 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.931 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # IFS=: 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.931 22:49:23 -- accel/accel.sh@21 -- # val= 00:06:55.931 22:49:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # IFS=: 00:06:55.931 22:49:23 -- accel/accel.sh@20 -- # read -r var val 00:06:55.931 22:49:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.931 22:49:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:55.931 22:49:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.931 00:06:55.931 real 0m2.593s 00:06:55.931 user 0m2.391s 00:06:55.931 sys 0m0.208s 00:06:55.931 22:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.931 22:49:23 -- common/autotest_common.sh@10 -- # set +x 00:06:55.931 ************************************ 00:06:55.931 END TEST accel_decmop_full 00:06:55.931 ************************************ 00:06:55.931 22:49:23 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.931 22:49:23 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:55.931 22:49:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.931 22:49:23 -- common/autotest_common.sh@10 -- # set +x 00:06:55.931 ************************************ 00:06:55.931 START TEST accel_decomp_mcore 00:06:55.931 ************************************ 00:06:55.931 22:49:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.931 22:49:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.931 22:49:23 -- accel/accel.sh@17 -- # local accel_module 00:06:55.931 22:49:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.931 22:49:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.931 22:49:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.931 22:49:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.931 22:49:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.931 22:49:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.931 22:49:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.931 22:49:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.931 22:49:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.931 22:49:23 -- accel/accel.sh@42 -- # jq -r . 00:06:55.931 [2024-06-09 22:49:23.805994] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:55.931 [2024-06-09 22:49:23.806070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901562 ] 00:06:55.931 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.931 [2024-06-09 22:49:23.867033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.931 [2024-06-09 22:49:23.934120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.931 [2024-06-09 22:49:23.934261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.931 [2024-06-09 22:49:23.934468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.931 [2024-06-09 22:49:23.934608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.318 22:49:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.318 00:06:57.318 SPDK Configuration: 00:06:57.318 Core mask: 0xf 00:06:57.318 00:06:57.318 Accel Perf Configuration: 00:06:57.318 Workload Type: decompress 00:06:57.318 Transfer size: 4096 bytes 00:06:57.318 Vector count 1 00:06:57.318 Module: software 00:06:57.318 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.318 Queue depth: 32 00:06:57.318 Allocate depth: 32 00:06:57.318 # threads/core: 1 00:06:57.318 Run time: 1 seconds 00:06:57.318 Verify: Yes 00:06:57.318 00:06:57.318 Running for 1 seconds... 00:06:57.318 00:06:57.318 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.318 ------------------------------------------------------------------------------------ 00:06:57.318 0,0 58336/s 107 MiB/s 0 0 00:06:57.319 3,0 58560/s 107 MiB/s 0 0 00:06:57.319 2,0 86304/s 159 MiB/s 0 0 00:06:57.319 1,0 58400/s 107 MiB/s 0 0 00:06:57.319 ==================================================================================== 00:06:57.319 Total 261600/s 1021 MiB/s 0 0' 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.319 22:49:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:57.319 22:49:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.319 22:49:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.319 22:49:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.319 22:49:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.319 22:49:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.319 22:49:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.319 22:49:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.319 22:49:25 -- accel/accel.sh@42 -- # jq -r . 00:06:57.319 [2024-06-09 22:49:25.093935] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
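For reference (not part of the captured output): the accel_decomp_mcore run above passes core mask 0xf, so accel_perf starts a reactor on each of cores 0-3 and the results table gains one Core,Thread row per core, with the Total row aggregating all four (261600/s, about 1021 MiB/s at 4096 bytes per transfer). A rough standalone invocation mirroring the command line recorded above would look something like the following, with the path taken relative to the SPDK checkout; in the job itself the harness additionally feeds a JSON accel config through -c /dev/fd/62:

  $ ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf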
00:06:57.319 [2024-06-09 22:49:25.094010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901709 ] 00:06:57.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.319 [2024-06-09 22:49:25.154089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.319 [2024-06-09 22:49:25.219554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.319 [2024-06-09 22:49:25.219673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.319 [2024-06-09 22:49:25.219815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.319 [2024-06-09 22:49:25.219816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=0xf 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=decompress 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=software 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=32 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=32 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=1 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val=Yes 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:57.319 22:49:25 -- accel/accel.sh@21 -- # val= 00:06:57.319 22:49:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # IFS=: 00:06:57.319 22:49:25 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 
22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@21 -- # val= 00:06:58.267 22:49:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # IFS=: 00:06:58.267 22:49:26 -- accel/accel.sh@20 -- # read -r var val 00:06:58.267 22:49:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.267 22:49:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:58.267 22:49:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.267 00:06:58.267 real 0m2.578s 00:06:58.267 user 0m8.851s 00:06:58.267 sys 0m0.201s 00:06:58.267 22:49:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.267 22:49:26 -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 ************************************ 00:06:58.267 END TEST accel_decomp_mcore 00:06:58.267 ************************************ 00:06:58.267 22:49:26 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.267 22:49:26 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:58.267 22:49:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.267 22:49:26 -- common/autotest_common.sh@10 -- # set +x 00:06:58.267 ************************************ 00:06:58.267 START TEST accel_decomp_full_mcore 00:06:58.267 ************************************ 00:06:58.267 22:49:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.267 22:49:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.267 22:49:26 -- accel/accel.sh@17 -- # local accel_module 00:06:58.267 22:49:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.267 22:49:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.267 22:49:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.267 22:49:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.267 22:49:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.267 22:49:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.267 22:49:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.267 22:49:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.267 22:49:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.267 22:49:26 -- accel/accel.sh@42 -- # jq -r . 00:06:58.267 [2024-06-09 22:49:26.431062] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:06:58.267 [2024-06-09 22:49:26.431172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901946 ] 00:06:58.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.566 [2024-06-09 22:49:26.503906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.566 [2024-06-09 22:49:26.572411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.566 [2024-06-09 22:49:26.572543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.566 [2024-06-09 22:49:26.572809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.566 [2024-06-09 22:49:26.572810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.954 22:49:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:59.954 00:06:59.954 SPDK Configuration: 00:06:59.954 Core mask: 0xf 00:06:59.954 00:06:59.954 Accel Perf Configuration: 00:06:59.954 Workload Type: decompress 00:06:59.954 Transfer size: 111250 bytes 00:06:59.954 Vector count 1 00:06:59.954 Module: software 00:06:59.954 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.954 Queue depth: 32 00:06:59.954 Allocate depth: 32 00:06:59.954 # threads/core: 1 00:06:59.954 Run time: 1 seconds 00:06:59.954 Verify: Yes 00:06:59.954 00:06:59.954 Running for 1 seconds... 00:06:59.954 00:06:59.954 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.954 ------------------------------------------------------------------------------------ 00:06:59.954 0,0 4064/s 167 MiB/s 0 0 00:06:59.954 3,0 4096/s 169 MiB/s 0 0 00:06:59.954 2,0 5920/s 244 MiB/s 0 0 00:06:59.954 1,0 4096/s 169 MiB/s 0 0 00:06:59.954 ==================================================================================== 00:06:59.954 Total 18176/s 1928 MiB/s 0 0' 00:06:59.954 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.954 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.954 22:49:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.954 22:49:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:59.954 22:49:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.954 22:49:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.954 22:49:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.954 22:49:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.955 22:49:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.955 22:49:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.955 22:49:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.955 22:49:27 -- accel/accel.sh@42 -- # jq -r . 00:06:59.955 [2024-06-09 22:49:27.747083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
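For reference (not part of the captured output): in the -o 0 variants the decompress workload runs against the full 111250-byte buffer reported under "Transfer size" rather than 4096 bytes, so the per-second transfer counts are in the thousands instead of tens of thousands. The Total row still follows from transfers/s times transfer size:

  $ echo $(( 18176 * 111250 / 1048576 ))
  1928

matching the "Total 18176/s 1928 MiB/s" line in the accel_decomp_full_mcore results above.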
00:06:59.955 [2024-06-09 22:49:27.747211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902281 ] 00:06:59.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.955 [2024-06-09 22:49:27.816017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.955 [2024-06-09 22:49:27.880535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.955 [2024-06-09 22:49:27.880639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.955 [2024-06-09 22:49:27.880780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.955 [2024-06-09 22:49:27.880781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=0xf 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=decompress 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=software 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=32 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=32 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=1 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val=Yes 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:06:59.955 22:49:27 -- accel/accel.sh@21 -- # val= 00:06:59.955 22:49:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # IFS=: 00:06:59.955 22:49:27 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 
22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@21 -- # val= 00:07:00.897 22:49:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # IFS=: 00:07:00.897 22:49:29 -- accel/accel.sh@20 -- # read -r var val 00:07:00.897 22:49:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.897 22:49:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:00.897 22:49:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.897 00:07:00.897 real 0m2.632s 00:07:00.897 user 0m8.968s 00:07:00.897 sys 0m0.216s 00:07:00.897 22:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.897 22:49:29 -- common/autotest_common.sh@10 -- # set +x 00:07:00.897 ************************************ 00:07:00.897 END TEST accel_decomp_full_mcore 00:07:00.897 ************************************ 00:07:00.897 22:49:29 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.897 22:49:29 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:00.897 22:49:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.897 22:49:29 -- common/autotest_common.sh@10 -- # set +x 00:07:01.158 ************************************ 00:07:01.158 START TEST accel_decomp_mthread 00:07:01.158 ************************************ 00:07:01.158 22:49:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.158 22:49:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.158 22:49:29 -- accel/accel.sh@17 -- # local accel_module 00:07:01.158 22:49:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.158 22:49:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.159 22:49:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.159 22:49:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.159 22:49:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.159 22:49:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.159 22:49:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.159 22:49:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.159 22:49:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.159 22:49:29 -- accel/accel.sh@42 -- # jq -r . 00:07:01.159 [2024-06-09 22:49:29.105793] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:01.159 [2024-06-09 22:49:29.105875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902639 ] 00:07:01.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.159 [2024-06-09 22:49:29.166437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.159 [2024-06-09 22:49:29.229423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.545 22:49:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:02.545 00:07:02.545 SPDK Configuration: 00:07:02.545 Core mask: 0x1 00:07:02.545 00:07:02.545 Accel Perf Configuration: 00:07:02.545 Workload Type: decompress 00:07:02.545 Transfer size: 4096 bytes 00:07:02.545 Vector count 1 00:07:02.545 Module: software 00:07:02.545 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.545 Queue depth: 32 00:07:02.545 Allocate depth: 32 00:07:02.545 # threads/core: 2 00:07:02.545 Run time: 1 seconds 00:07:02.545 Verify: Yes 00:07:02.545 00:07:02.545 Running for 1 seconds... 00:07:02.545 00:07:02.545 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.545 ------------------------------------------------------------------------------------ 00:07:02.545 0,1 31904/s 58 MiB/s 0 0 00:07:02.545 0,0 31776/s 58 MiB/s 0 0 00:07:02.545 ==================================================================================== 00:07:02.546 Total 63680/s 248 MiB/s 0 0' 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.546 22:49:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.546 22:49:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.546 22:49:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.546 22:49:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.546 22:49:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.546 22:49:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.546 22:49:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.546 22:49:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.546 22:49:30 -- accel/accel.sh@42 -- # jq -r . 00:07:02.546 [2024-06-09 22:49:30.387323] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
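For reference (not part of the captured output): the -T 2 runs above use two worker threads on core 0 ("# threads/core: 2" in the configuration block), which is why the results table reports separate 0,1 and 0,0 rows; the Total transfer count is simply their sum:

  $ echo $(( 31904 + 31776 ))
  63680

i.e. the 63680/s (about 248 MiB/s at 4096 bytes per transfer) shown in the Total line for accel_decomp_mthread.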
00:07:02.546 [2024-06-09 22:49:30.387396] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3902887 ] 00:07:02.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.546 [2024-06-09 22:49:30.446961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.546 [2024-06-09 22:49:30.513618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=0x1 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=decompress 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=software 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=32 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 
-- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=32 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=2 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val=Yes 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:02.546 22:49:30 -- accel/accel.sh@21 -- # val= 00:07:02.546 22:49:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # IFS=: 00:07:02.546 22:49:30 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@21 -- # val= 00:07:03.492 22:49:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # IFS=: 00:07:03.492 22:49:31 -- accel/accel.sh@20 -- # read -r var val 00:07:03.492 22:49:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.492 22:49:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:03.492 22:49:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.492 00:07:03.492 real 0m2.573s 00:07:03.492 user 0m2.384s 00:07:03.492 sys 0m0.197s 00:07:03.492 22:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.492 22:49:31 -- common/autotest_common.sh@10 -- # set +x 
00:07:03.492 ************************************ 00:07:03.492 END TEST accel_decomp_mthread 00:07:03.492 ************************************ 00:07:03.754 22:49:31 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.754 22:49:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:03.754 22:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.754 22:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:03.754 ************************************ 00:07:03.754 START TEST accel_deomp_full_mthread 00:07:03.754 ************************************ 00:07:03.754 22:49:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.754 22:49:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.754 22:49:31 -- accel/accel.sh@17 -- # local accel_module 00:07:03.754 22:49:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.754 22:49:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.754 22:49:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.754 22:49:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.754 22:49:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.754 22:49:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.754 22:49:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.754 22:49:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.754 22:49:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.754 22:49:31 -- accel/accel.sh@42 -- # jq -r . 00:07:03.754 [2024-06-09 22:49:31.720940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:03.754 [2024-06-09 22:49:31.721012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903064 ] 00:07:03.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.754 [2024-06-09 22:49:31.782026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.754 [2024-06-09 22:49:31.847570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.141 22:49:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:05.141 00:07:05.141 SPDK Configuration: 00:07:05.141 Core mask: 0x1 00:07:05.141 00:07:05.141 Accel Perf Configuration: 00:07:05.141 Workload Type: decompress 00:07:05.141 Transfer size: 111250 bytes 00:07:05.141 Vector count 1 00:07:05.141 Module: software 00:07:05.141 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.141 Queue depth: 32 00:07:05.141 Allocate depth: 32 00:07:05.141 # threads/core: 2 00:07:05.141 Run time: 1 seconds 00:07:05.141 Verify: Yes 00:07:05.141 00:07:05.141 Running for 1 seconds... 
00:07:05.141 00:07:05.141 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.141 ------------------------------------------------------------------------------------ 00:07:05.141 0,1 2112/s 87 MiB/s 0 0 00:07:05.141 0,0 2048/s 84 MiB/s 0 0 00:07:05.141 ==================================================================================== 00:07:05.141 Total 4160/s 441 MiB/s 0 0' 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.141 22:49:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.141 22:49:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.141 22:49:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.141 22:49:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.141 22:49:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.141 22:49:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.141 22:49:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.141 22:49:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.141 22:49:33 -- accel/accel.sh@42 -- # jq -r . 00:07:05.141 [2024-06-09 22:49:33.036139] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:05.141 [2024-06-09 22:49:33.036244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903348 ] 00:07:05.141 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.141 [2024-06-09 22:49:33.101408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.141 [2024-06-09 22:49:33.164142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=0x1 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=decompress 00:07:05.141 
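For reference, the accel_deomp_full_mthread run above boils down to one accel_perf invocation; the flags below are copied from the command line logged above, and $SPDK_DIR standing in for this workspace's SPDK checkout is the only added assumption. A minimal sketch of reproducing the run by hand (the harness additionally feeds its accel JSON config over -c /dev/fd/62; with no hardware modules configured the run lands on the software module, which is what the result table reports):

  # Sketch: rerun the multithreaded software decompress workload shown above.
  #   -t 1   run for 1 second            -w decompress   workload type
  #   -l     compressed input file        -y              verify the output
  #   -o 0   transfer size (0 appears to mean "use the input file size", 111250 bytes here)
  #   -T 2   two worker threads per core
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: adjust to your checkout
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2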
22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=software 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=32 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=32 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val=2 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.141 22:49:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.141 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.141 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.142 22:49:33 -- accel/accel.sh@21 -- # val=Yes 00:07:05.142 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.142 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.142 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:05.142 22:49:33 -- accel/accel.sh@21 -- # val= 00:07:05.142 22:49:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # IFS=: 00:07:05.142 22:49:33 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@21 -- # val= 00:07:06.530 22:49:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # IFS=: 00:07:06.530 22:49:34 -- accel/accel.sh@20 -- # read -r var val 00:07:06.530 22:49:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.530 22:49:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:06.530 22:49:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.530 00:07:06.530 real 0m2.632s 00:07:06.530 user 0m2.434s 00:07:06.530 sys 0m0.206s 00:07:06.530 22:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.530 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.530 ************************************ 00:07:06.530 END TEST accel_deomp_full_mthread 00:07:06.530 ************************************ 00:07:06.530 22:49:34 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:06.530 22:49:34 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.530 22:49:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:06.530 22:49:34 -- accel/accel.sh@129 -- # build_accel_config 00:07:06.530 22:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.530 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.530 22:49:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.530 22:49:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.530 22:49:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.530 22:49:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.530 22:49:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.530 22:49:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.530 22:49:34 -- accel/accel.sh@42 -- # jq -r . 00:07:06.530 ************************************ 00:07:06.530 START TEST accel_dif_functional_tests 00:07:06.530 ************************************ 00:07:06.530 22:49:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.530 [2024-06-09 22:49:34.419214] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:07:06.530 [2024-06-09 22:49:34.419272] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903704 ] 00:07:06.530 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.530 [2024-06-09 22:49:34.477087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.530 [2024-06-09 22:49:34.541524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.530 [2024-06-09 22:49:34.541700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.530 [2024-06-09 22:49:34.541704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.530 00:07:06.530 00:07:06.530 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.530 http://cunit.sourceforge.net/ 00:07:06.530 00:07:06.530 00:07:06.530 Suite: accel_dif 00:07:06.530 Test: verify: DIF generated, GUARD check ...passed 00:07:06.530 Test: verify: DIF generated, APPTAG check ...passed 00:07:06.530 Test: verify: DIF generated, REFTAG check ...passed 00:07:06.530 Test: verify: DIF not generated, GUARD check ...[2024-06-09 22:49:34.596739] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.530 [2024-06-09 22:49:34.596777] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.530 passed 00:07:06.530 Test: verify: DIF not generated, APPTAG check ...[2024-06-09 22:49:34.596807] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.530 [2024-06-09 22:49:34.596822] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.530 passed 00:07:06.530 Test: verify: DIF not generated, REFTAG check ...[2024-06-09 22:49:34.596837] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.530 [2024-06-09 22:49:34.596851] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.530 passed 00:07:06.530 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:06.530 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-09 22:49:34.596893] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:06.530 passed 00:07:06.530 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:06.530 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:06.530 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:06.530 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-09 22:49:34.597006] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:06.530 passed 00:07:06.530 Test: generate copy: DIF generated, GUARD check ...passed 00:07:06.530 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:06.530 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:06.530 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:06.530 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:06.530 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:06.530 Test: generate copy: iovecs-len validate ...[2024-06-09 22:49:34.597198] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:06.530 passed 00:07:06.530 Test: generate copy: buffer alignment validate ...passed 00:07:06.530 00:07:06.530 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.530 suites 1 1 n/a 0 0 00:07:06.530 tests 20 20 20 0 0 00:07:06.530 asserts 204 204 204 0 n/a 00:07:06.530 00:07:06.530 Elapsed time = 0.000 seconds 00:07:06.792 00:07:06.792 real 0m0.343s 00:07:06.792 user 0m0.496s 00:07:06.792 sys 0m0.108s 00:07:06.792 22:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.792 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 ************************************ 00:07:06.792 END TEST accel_dif_functional_tests 00:07:06.792 ************************************ 00:07:06.792 00:07:06.792 real 0m54.751s 00:07:06.792 user 1m3.318s 00:07:06.792 sys 0m5.621s 00:07:06.792 22:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.792 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 ************************************ 00:07:06.792 END TEST accel 00:07:06.792 ************************************ 00:07:06.792 22:49:34 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.792 22:49:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:06.792 22:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.792 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 ************************************ 00:07:06.792 START TEST accel_rpc 00:07:06.792 ************************************ 00:07:06.792 22:49:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.792 * Looking for test storage... 00:07:06.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:06.792 22:49:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.792 22:49:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3903768 00:07:06.792 22:49:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 3903768 00:07:06.792 22:49:34 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:06.792 22:49:34 -- common/autotest_common.sh@819 -- # '[' -z 3903768 ']' 00:07:06.792 22:49:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.792 22:49:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:06.792 22:49:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.792 22:49:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:06.792 22:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:06.792 [2024-06-09 22:49:34.930162] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
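The CUnit suite above exercises the accel DIF verify and generate-copy paths; each "Failed to compare Guard/App Tag/Ref Tag" line is a deliberately injected mismatch that the test expects dif.c to flag, so the suite still finishes 20/20 passed. The suite is a standalone binary, invoked by the harness as logged; a sketch of that invocation follows, where $SPDK_DIR is an assumption for the checkout path and fd 62 must carry the JSON accel config that build_accel_config in test/accel/accel.sh generates (the sketch does not reproduce that wrapper).

  # Sketch: the DIF functional suite is a standalone CUnit binary.
  # The -c /dev/fd/62 argument is how the harness hands it the accel JSON config.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: adjust to your checkout
  "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62                 # fd 62: config supplied by the wrapper script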
00:07:06.792 [2024-06-09 22:49:34.930222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903768 ] 00:07:06.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.053 [2024-06-09 22:49:34.992061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.053 [2024-06-09 22:49:35.054819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.053 [2024-06-09 22:49:35.054945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.626 22:49:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:07.626 22:49:35 -- common/autotest_common.sh@852 -- # return 0 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:07.626 22:49:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:07.626 22:49:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.626 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 ************************************ 00:07:07.626 START TEST accel_assign_opcode 00:07:07.626 ************************************ 00:07:07.626 22:49:35 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:07.626 22:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:07.626 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 [2024-06-09 22:49:35.684765] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:07.626 22:49:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:07.626 22:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:07.626 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:07.626 [2024-06-09 22:49:35.696797] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:07.626 22:49:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:07.626 22:49:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:07.626 22:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:07.626 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:07.888 22:49:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:07.888 22:49:35 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:07.888 22:49:35 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:07.888 22:49:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:07.888 22:49:35 -- common/autotest_common.sh@10 -- # set +x 00:07:07.888 22:49:35 -- accel/accel_rpc.sh@42 -- # grep software 00:07:07.888 22:49:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:07.888 software 00:07:07.888 00:07:07.888 real 0m0.210s 00:07:07.888 user 0m0.052s 00:07:07.888 sys 0m0.008s 00:07:07.888 22:49:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.888 22:49:35 -- common/autotest_common.sh@10 -- # set +x 
00:07:07.888 ************************************ 00:07:07.888 END TEST accel_assign_opcode 00:07:07.888 ************************************ 00:07:07.888 22:49:35 -- accel/accel_rpc.sh@55 -- # killprocess 3903768 00:07:07.888 22:49:35 -- common/autotest_common.sh@926 -- # '[' -z 3903768 ']' 00:07:07.888 22:49:35 -- common/autotest_common.sh@930 -- # kill -0 3903768 00:07:07.888 22:49:35 -- common/autotest_common.sh@931 -- # uname 00:07:07.888 22:49:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:07.888 22:49:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3903768 00:07:07.888 22:49:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:07.888 22:49:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:07.888 22:49:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3903768' 00:07:07.888 killing process with pid 3903768 00:07:07.888 22:49:35 -- common/autotest_common.sh@945 -- # kill 3903768 00:07:07.888 22:49:35 -- common/autotest_common.sh@950 -- # wait 3903768 00:07:08.149 00:07:08.149 real 0m1.396s 00:07:08.149 user 0m1.462s 00:07:08.149 sys 0m0.365s 00:07:08.149 22:49:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.149 22:49:36 -- common/autotest_common.sh@10 -- # set +x 00:07:08.149 ************************************ 00:07:08.149 END TEST accel_rpc 00:07:08.149 ************************************ 00:07:08.149 22:49:36 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.149 22:49:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:08.149 22:49:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.149 22:49:36 -- common/autotest_common.sh@10 -- # set +x 00:07:08.150 ************************************ 00:07:08.150 START TEST app_cmdline 00:07:08.150 ************************************ 00:07:08.150 22:49:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.150 * Looking for test storage... 00:07:08.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.150 22:49:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.150 22:49:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3904180 00:07:08.150 22:49:36 -- app/cmdline.sh@18 -- # waitforlisten 3904180 00:07:08.150 22:49:36 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.150 22:49:36 -- common/autotest_common.sh@819 -- # '[' -z 3904180 ']' 00:07:08.150 22:49:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.150 22:49:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:08.150 22:49:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.150 22:49:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:08.150 22:49:36 -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 [2024-06-09 22:49:36.365517] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
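The accel_rpc sequence above shows the opcode-assignment flow: spdk_tgt is started with --wait-for-rpc, an operation is mapped to a module before the framework initializes, and the assignment is read back afterwards. A condensed sketch of the same RPC calls (assuming a target already running with --wait-for-rpc; the rpc.py path is the one used in this workspace):

  # Sketch: opcode assignment must happen before framework_start_init, as the test above does.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC accel_assign_opc -o copy -m software     # map the 'copy' opcode to the software module
  $RPC framework_start_init                     # finish startup; assignments take effect here
  $RPC accel_get_opc_assignments | jq -r .copy  # expected output: software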
00:07:08.410 [2024-06-09 22:49:36.365597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3904180 ] 00:07:08.410 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.410 [2024-06-09 22:49:36.428572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.410 [2024-06-09 22:49:36.500601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.410 [2024-06-09 22:49:36.500736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.983 22:49:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:08.984 22:49:37 -- common/autotest_common.sh@852 -- # return 0 00:07:08.984 22:49:37 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.245 { 00:07:09.245 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:07:09.245 "fields": { 00:07:09.245 "major": 24, 00:07:09.245 "minor": 1, 00:07:09.245 "patch": 1, 00:07:09.245 "suffix": "-pre", 00:07:09.245 "commit": "130b9406a" 00:07:09.245 } 00:07:09.245 } 00:07:09.245 22:49:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.245 22:49:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.245 22:49:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.245 22:49:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.245 22:49:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.245 22:49:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.245 22:49:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:09.245 22:49:37 -- app/cmdline.sh@26 -- # sort 00:07:09.245 22:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.245 22:49:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:09.245 22:49:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.245 22:49:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.245 22:49:37 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.245 22:49:37 -- common/autotest_common.sh@640 -- # local es=0 00:07:09.245 22:49:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.245 22:49:37 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.245 22:49:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:09.245 22:49:37 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.245 22:49:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:09.245 22:49:37 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.245 22:49:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:09.245 22:49:37 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:09.245 22:49:37 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.246 22:49:37 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.507 request: 00:07:09.507 { 00:07:09.507 "method": "env_dpdk_get_mem_stats", 00:07:09.507 "req_id": 1 00:07:09.507 } 00:07:09.507 Got JSON-RPC error response 00:07:09.507 response: 00:07:09.507 { 00:07:09.507 "code": -32601, 00:07:09.507 "message": "Method not found" 00:07:09.507 } 00:07:09.507 22:49:37 -- common/autotest_common.sh@643 -- # es=1 00:07:09.507 22:49:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:09.507 22:49:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:09.507 22:49:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:09.507 22:49:37 -- app/cmdline.sh@1 -- # killprocess 3904180 00:07:09.507 22:49:37 -- common/autotest_common.sh@926 -- # '[' -z 3904180 ']' 00:07:09.507 22:49:37 -- common/autotest_common.sh@930 -- # kill -0 3904180 00:07:09.507 22:49:37 -- common/autotest_common.sh@931 -- # uname 00:07:09.507 22:49:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:09.507 22:49:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3904180 00:07:09.507 22:49:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:09.507 22:49:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:09.507 22:49:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3904180' 00:07:09.507 killing process with pid 3904180 00:07:09.507 22:49:37 -- common/autotest_common.sh@945 -- # kill 3904180 00:07:09.507 22:49:37 -- common/autotest_common.sh@950 -- # wait 3904180 00:07:09.769 00:07:09.769 real 0m1.503s 00:07:09.769 user 0m1.795s 00:07:09.769 sys 0m0.378s 00:07:09.769 22:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.769 22:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.769 ************************************ 00:07:09.769 END TEST app_cmdline 00:07:09.769 ************************************ 00:07:09.769 22:49:37 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:09.769 22:49:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:09.769 22:49:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.769 22:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.769 ************************************ 00:07:09.769 START TEST version 00:07:09.769 ************************************ 00:07:09.769 22:49:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:09.769 * Looking for test storage... 
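The app_cmdline test above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and confirms that only those two methods answer, while env_dpdk_get_mem_stats is rejected with the JSON-RPC -32601 "Method not found" error shown above. A short sketch of the same checks against a target launched that way (the .version jq filter is an assumption based on the response printed above; everything else mirrors the logged commands):

  # Sketch: verify the RPC allowlist behaviour exercised above.
  # Assumes spdk_tgt was launched with: --rpcs-allowed spdk_get_version,rpc_get_methods
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC spdk_get_version | jq -r .version        # allowed: prints "SPDK v24.01.1-pre git sha1 130b9406a"
  $RPC rpc_get_methods | jq -r '.[]' | sort     # allowed: lists exactly the permitted methods
  $RPC env_dpdk_get_mem_stats                   # blocked: expect JSON-RPC error -32601 (Method not found)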
00:07:09.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:09.769 22:49:37 -- app/version.sh@17 -- # get_header_version major 00:07:09.769 22:49:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:09.769 22:49:37 -- app/version.sh@14 -- # cut -f2 00:07:09.769 22:49:37 -- app/version.sh@14 -- # tr -d '"' 00:07:09.769 22:49:37 -- app/version.sh@17 -- # major=24 00:07:09.769 22:49:37 -- app/version.sh@18 -- # get_header_version minor 00:07:09.769 22:49:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:09.769 22:49:37 -- app/version.sh@14 -- # cut -f2 00:07:09.769 22:49:37 -- app/version.sh@14 -- # tr -d '"' 00:07:09.769 22:49:37 -- app/version.sh@18 -- # minor=1 00:07:09.769 22:49:37 -- app/version.sh@19 -- # get_header_version patch 00:07:09.769 22:49:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:09.769 22:49:37 -- app/version.sh@14 -- # cut -f2 00:07:09.769 22:49:37 -- app/version.sh@14 -- # tr -d '"' 00:07:09.769 22:49:37 -- app/version.sh@19 -- # patch=1 00:07:09.769 22:49:37 -- app/version.sh@20 -- # get_header_version suffix 00:07:09.769 22:49:37 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:09.769 22:49:37 -- app/version.sh@14 -- # cut -f2 00:07:09.769 22:49:37 -- app/version.sh@14 -- # tr -d '"' 00:07:09.769 22:49:37 -- app/version.sh@20 -- # suffix=-pre 00:07:09.769 22:49:37 -- app/version.sh@22 -- # version=24.1 00:07:09.769 22:49:37 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:09.769 22:49:37 -- app/version.sh@25 -- # version=24.1.1 00:07:09.769 22:49:37 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:09.769 22:49:37 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:09.769 22:49:37 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.769 22:49:37 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:09.769 22:49:37 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:09.769 00:07:09.769 real 0m0.167s 00:07:09.769 user 0m0.084s 00:07:09.769 sys 0m0.120s 00:07:09.769 22:49:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.769 22:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:09.769 ************************************ 00:07:09.769 END TEST version 00:07:09.769 ************************************ 00:07:10.031 22:49:37 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:10.031 22:49:37 -- spdk/autotest.sh@204 -- # uname -s 00:07:10.031 22:49:37 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:10.031 22:49:37 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:10.031 22:49:37 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:10.031 22:49:37 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:10.031 22:49:37 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:10.031 22:49:37 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:10.031 22:49:37 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:10.031 22:49:37 -- common/autotest_common.sh@10 -- # set +x 00:07:10.031 22:49:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:10.031 22:49:38 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:10.031 22:49:38 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:10.031 22:49:38 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:10.031 22:49:38 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:10.031 22:49:38 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:10.031 22:49:38 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.031 22:49:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:10.031 22:49:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.031 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.031 ************************************ 00:07:10.031 START TEST nvmf_tcp 00:07:10.031 ************************************ 00:07:10.031 22:49:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:10.031 * Looking for test storage... 00:07:10.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:10.031 22:49:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:10.031 22:49:38 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:10.031 22:49:38 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.031 22:49:38 -- nvmf/common.sh@7 -- # uname -s 00:07:10.031 22:49:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.031 22:49:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.031 22:49:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.031 22:49:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.031 22:49:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.031 22:49:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.031 22:49:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.031 22:49:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.031 22:49:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.031 22:49:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.031 22:49:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.031 22:49:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.031 22:49:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.031 22:49:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.031 22:49:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.031 22:49:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.031 22:49:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.031 22:49:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.031 22:49:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.031 22:49:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.031 22:49:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.031 22:49:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.031 22:49:38 -- paths/export.sh@5 -- # export PATH 00:07:10.031 22:49:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.031 22:49:38 -- nvmf/common.sh@46 -- # : 0 00:07:10.031 22:49:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:10.031 22:49:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:10.032 22:49:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:10.032 22:49:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.032 22:49:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.032 22:49:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:10.032 22:49:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:10.032 22:49:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:10.032 22:49:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:10.032 22:49:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:10.032 22:49:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:10.032 22:49:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:10.032 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.032 22:49:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:10.032 22:49:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.032 22:49:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:10.032 22:49:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.032 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.032 ************************************ 00:07:10.032 START TEST nvmf_example 00:07:10.032 ************************************ 00:07:10.032 22:49:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:10.294 * Looking for test storage... 
00:07:10.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.294 22:49:38 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.294 22:49:38 -- nvmf/common.sh@7 -- # uname -s 00:07:10.294 22:49:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.294 22:49:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.294 22:49:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.294 22:49:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.294 22:49:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.294 22:49:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.294 22:49:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.294 22:49:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.294 22:49:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.294 22:49:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.294 22:49:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.294 22:49:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:10.294 22:49:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.294 22:49:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.294 22:49:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.294 22:49:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.294 22:49:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.294 22:49:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.294 22:49:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.294 22:49:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.294 22:49:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.294 22:49:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.294 22:49:38 -- paths/export.sh@5 -- # export PATH 00:07:10.294 22:49:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.294 22:49:38 -- nvmf/common.sh@46 -- # : 0 00:07:10.294 22:49:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:10.294 22:49:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:10.294 22:49:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:10.294 22:49:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.294 22:49:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.294 22:49:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:10.294 22:49:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:10.294 22:49:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:10.294 22:49:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:10.294 22:49:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:10.294 22:49:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:10.294 22:49:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:10.294 22:49:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:10.294 22:49:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:10.294 22:49:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:10.294 22:49:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:10.294 22:49:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:10.294 22:49:38 -- common/autotest_common.sh@10 -- # set +x 00:07:10.294 22:49:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:10.294 22:49:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:10.294 22:49:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.294 22:49:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:10.294 22:49:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:10.294 22:49:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:10.294 22:49:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.294 22:49:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:10.294 22:49:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.294 22:49:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:10.294 22:49:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:10.294 22:49:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:10.294 22:49:38 -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.890 22:49:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:16.890 22:49:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:16.890 22:49:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:16.890 22:49:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:16.890 22:49:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:16.890 22:49:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:16.890 22:49:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:16.890 22:49:44 -- nvmf/common.sh@294 -- # net_devs=() 00:07:16.890 22:49:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:16.890 22:49:44 -- nvmf/common.sh@295 -- # e810=() 00:07:16.890 22:49:44 -- nvmf/common.sh@295 -- # local -ga e810 00:07:16.890 22:49:44 -- nvmf/common.sh@296 -- # x722=() 00:07:16.890 22:49:44 -- nvmf/common.sh@296 -- # local -ga x722 00:07:16.890 22:49:44 -- nvmf/common.sh@297 -- # mlx=() 00:07:16.890 22:49:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:16.890 22:49:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.890 22:49:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:16.890 22:49:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:16.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:16.890 22:49:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:16.890 22:49:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:16.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:16.890 22:49:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
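gather_supported_nvmf_pci_devs above walks the PCI bus cache for the NIC families the nvmf tests know about and, with SPDK_TEST_NVMF_NICS=e810, keeps the two Intel E810 ports (vendor 0x8086, device 0x159b) reported as Found 0000:4b:00.0 and 0000:4b:00.1; the net devices bound to them are listed a few lines further down. A hypothetical one-liner (not part of the script) for confirming the same hardware on a node uses lspci with that vendor:device pair, plus the same /sys lookup the script performs:

  # Sketch: list Intel E810 ports (0x8086:0x159b) and the netdev name bound to each PCI function.
  lspci -D -d 8086:159b
  for pci in /sys/bus/pci/devices/*; do
      [ -d "$pci/net" ] || continue
      echo "$(basename "$pci"): $(ls "$pci/net")"       # e.g. 0000:4b:00.0: cvl_0_0 on this node
  done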
00:07:16.890 22:49:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:16.890 22:49:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.890 22:49:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.890 22:49:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:16.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:16.890 22:49:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:16.890 22:49:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.890 22:49:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.890 22:49:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:16.890 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:16.890 22:49:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:16.890 22:49:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:16.890 22:49:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.890 22:49:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.890 22:49:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:16.890 22:49:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.890 22:49:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.890 22:49:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:16.890 22:49:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.890 22:49:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.890 22:49:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:16.890 22:49:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:16.890 22:49:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.890 22:49:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.890 22:49:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.890 22:49:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.890 22:49:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:16.890 22:49:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.890 22:49:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.890 22:49:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.890 22:49:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:16.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:07:16.890 00:07:16.890 --- 10.0.0.2 ping statistics --- 00:07:16.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.890 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:07:16.890 22:49:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:07:16.890 00:07:16.890 --- 10.0.0.1 ping statistics --- 00:07:16.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.890 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:07:16.890 22:49:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.890 22:49:44 -- nvmf/common.sh@410 -- # return 0 00:07:16.890 22:49:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:16.890 22:49:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.890 22:49:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:16.890 22:49:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.890 22:49:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:16.890 22:49:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:16.890 22:49:45 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:16.891 22:49:45 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:16.891 22:49:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:16.891 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:16.891 22:49:45 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:16.891 22:49:45 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:16.891 22:49:45 -- target/nvmf_example.sh@34 -- # nvmfpid=3908281 00:07:16.891 22:49:45 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:16.891 22:49:45 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:16.891 22:49:45 -- target/nvmf_example.sh@36 -- # waitforlisten 3908281 00:07:16.891 22:49:45 -- common/autotest_common.sh@819 -- # '[' -z 3908281 ']' 00:07:16.891 22:49:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.891 22:49:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:16.891 22:49:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
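The nvmf/common.sh trace above amounts to a small fixed topology: one E810 port (cvl_0_0) is moved into a dedicated namespace and addressed as 10.0.0.2/24, the other (cvl_0_1) stays in the default namespace as 10.0.0.1/24, TCP port 4420 is opened, and both directions are ping-verified. Collected into one place, with every command taken verbatim from the log:

```bash
# Condensed replay of the namespace plumbing the trace performs (root required).
# Interface names cvl_0_0/cvl_0_1 and the addresses are exactly those shown above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one E810 port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept inbound TCP/4420 (NVMe over TCP) on the initiator-side port, then
# verify reachability in both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

The example target is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF), which is why the perf initiator that runs next connects from the default namespace to traddr 10.0.0.2, trsvcid 4420.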
00:07:16.891 22:49:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:16.891 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.153 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.726 22:49:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:17.726 22:49:45 -- common/autotest_common.sh@852 -- # return 0 00:07:17.726 22:49:45 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:17.726 22:49:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:17.726 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.726 22:49:45 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.726 22:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.726 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.726 22:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.726 22:49:45 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:17.726 22:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.726 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.988 22:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.988 22:49:45 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:17.988 22:49:45 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.988 22:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.988 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.988 22:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.988 22:49:45 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:17.988 22:49:45 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:17.988 22:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.988 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.988 22:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.988 22:49:45 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:17.988 22:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.988 22:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:17.988 22:49:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.988 22:49:45 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:17.988 22:49:45 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:17.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.261 Initializing NVMe Controllers 00:07:30.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:30.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:30.261 Initialization complete. Launching workers. 
00:07:30.261 ======================================================== 00:07:30.261 Latency(us) 00:07:30.261 Device Information : IOPS MiB/s Average min max 00:07:30.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13883.44 54.23 4610.91 901.60 15319.28 00:07:30.261 ======================================================== 00:07:30.261 Total : 13883.44 54.23 4610.91 901.60 15319.28 00:07:30.261 00:07:30.261 22:49:56 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:30.261 22:49:56 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:30.261 22:49:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:30.261 22:49:56 -- nvmf/common.sh@116 -- # sync 00:07:30.261 22:49:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:30.261 22:49:56 -- nvmf/common.sh@119 -- # set +e 00:07:30.261 22:49:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:30.261 22:49:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:30.261 rmmod nvme_tcp 00:07:30.261 rmmod nvme_fabrics 00:07:30.261 rmmod nvme_keyring 00:07:30.261 22:49:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:30.261 22:49:56 -- nvmf/common.sh@123 -- # set -e 00:07:30.261 22:49:56 -- nvmf/common.sh@124 -- # return 0 00:07:30.261 22:49:56 -- nvmf/common.sh@477 -- # '[' -n 3908281 ']' 00:07:30.261 22:49:56 -- nvmf/common.sh@478 -- # killprocess 3908281 00:07:30.261 22:49:56 -- common/autotest_common.sh@926 -- # '[' -z 3908281 ']' 00:07:30.261 22:49:56 -- common/autotest_common.sh@930 -- # kill -0 3908281 00:07:30.261 22:49:56 -- common/autotest_common.sh@931 -- # uname 00:07:30.261 22:49:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:30.261 22:49:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3908281 00:07:30.261 22:49:56 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:30.261 22:49:56 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:30.261 22:49:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3908281' 00:07:30.261 killing process with pid 3908281 00:07:30.261 22:49:56 -- common/autotest_common.sh@945 -- # kill 3908281 00:07:30.261 22:49:56 -- common/autotest_common.sh@950 -- # wait 3908281 00:07:30.261 nvmf threads initialize successfully 00:07:30.261 bdev subsystem init successfully 00:07:30.261 created a nvmf target service 00:07:30.261 create targets's poll groups done 00:07:30.261 all subsystems of target started 00:07:30.261 nvmf target is running 00:07:30.261 all subsystems of target stopped 00:07:30.261 destroy targets's poll groups done 00:07:30.261 destroyed the nvmf target service 00:07:30.261 bdev subsystem finish successfully 00:07:30.261 nvmf threads destroy successfully 00:07:30.261 22:49:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:30.261 22:49:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:30.261 22:49:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:30.261 22:49:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.261 22:49:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:30.261 22:49:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.261 22:49:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.261 22:49:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.523 22:49:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:30.523 22:49:58 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:30.523 22:49:58 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:30.523 22:49:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.523 00:07:30.523 real 0m20.418s 00:07:30.523 user 0m46.492s 00:07:30.523 sys 0m5.972s 00:07:30.523 22:49:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.523 22:49:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.523 ************************************ 00:07:30.523 END TEST nvmf_example 00:07:30.523 ************************************ 00:07:30.523 22:49:58 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:30.523 22:49:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:30.523 22:49:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.523 22:49:58 -- common/autotest_common.sh@10 -- # set +x 00:07:30.523 ************************************ 00:07:30.523 START TEST nvmf_filesystem 00:07:30.523 ************************************ 00:07:30.523 22:49:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:30.786 * Looking for test storage... 00:07:30.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.786 22:49:58 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:30.786 22:49:58 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:30.786 22:49:58 -- common/autotest_common.sh@34 -- # set -e 00:07:30.786 22:49:58 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:30.786 22:49:58 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:30.786 22:49:58 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:30.786 22:49:58 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:30.786 22:49:58 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:30.786 22:49:58 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:30.786 22:49:58 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:30.786 22:49:58 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:30.786 22:49:58 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:30.786 22:49:58 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:30.786 22:49:58 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:30.786 22:49:58 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:30.786 22:49:58 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:30.786 22:49:58 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:30.786 22:49:58 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:30.786 22:49:58 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:30.786 22:49:58 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:30.786 22:49:58 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:30.786 22:49:58 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:30.786 22:49:58 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:30.786 22:49:58 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:30.786 22:49:58 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:30.786 22:49:58 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:30.786 22:49:58 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:30.786 22:49:58 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:30.786 22:49:58 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:30.786 22:49:58 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:30.786 22:49:58 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:30.786 22:49:58 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:30.786 22:49:58 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:30.786 22:49:58 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:30.786 22:49:58 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:30.786 22:49:58 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:30.786 22:49:58 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:30.786 22:49:58 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:30.786 22:49:58 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:30.786 22:49:58 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:30.786 22:49:58 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:30.786 22:49:58 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:30.786 22:49:58 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:30.786 22:49:58 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:30.786 22:49:58 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:30.786 22:49:58 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:30.786 22:49:58 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:30.786 22:49:58 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:30.786 22:49:58 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:30.786 22:49:58 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:30.786 22:49:58 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:30.786 22:49:58 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:30.786 22:49:58 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:30.786 22:49:58 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:30.786 22:49:58 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:30.786 22:49:58 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:30.786 22:49:58 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:30.786 22:49:58 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:30.786 22:49:58 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:30.786 22:49:58 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:30.786 22:49:58 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:30.786 22:49:58 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:30.786 22:49:58 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:30.786 22:49:58 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:30.786 22:49:58 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:30.786 22:49:58 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:30.786 22:49:58 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
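A quick consistency check on the nvmf_example perf summary printed a little earlier (13883.44 IOPS, 54.23 MiB/s, 4610.91 us average latency for the -q 64 -o 4096 randrw run): the three columns agree with each other, treating the run as a steady closed loop at queue depth 64.

    throughput: 13883.44 IO/s x 4096 B ≈ 56.87 MB/s ≈ 54.23 MiB/s   (matches the MiB/s column)
    latency:    64 / 13883.44 IO/s     ≈ 4.61 ms    ≈ 4610 us       (Little's law; matches the Average column)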
00:07:30.786 22:49:58 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:30.786 22:49:58 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:30.786 22:49:58 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:30.786 22:49:58 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:30.786 22:49:58 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:30.786 22:49:58 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:30.786 22:49:58 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:30.786 22:49:58 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:30.786 22:49:58 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:30.786 22:49:58 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:30.786 22:49:58 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:30.786 22:49:58 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:30.786 22:49:58 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:30.786 22:49:58 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:30.786 22:49:58 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:30.786 22:49:58 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:30.786 22:49:58 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:30.786 22:49:58 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.786 22:49:58 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:30.786 22:49:58 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:30.786 22:49:58 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:30.786 22:49:58 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:30.786 22:49:58 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:30.786 22:49:58 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:30.786 22:49:58 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:30.786 22:49:58 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:30.786 22:49:58 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:30.786 22:49:58 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:30.786 22:49:58 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:30.786 #define SPDK_CONFIG_H 00:07:30.786 #define SPDK_CONFIG_APPS 1 00:07:30.786 #define SPDK_CONFIG_ARCH native 00:07:30.786 #undef SPDK_CONFIG_ASAN 00:07:30.786 #undef SPDK_CONFIG_AVAHI 00:07:30.786 #undef SPDK_CONFIG_CET 00:07:30.786 #define SPDK_CONFIG_COVERAGE 1 00:07:30.786 #define SPDK_CONFIG_CROSS_PREFIX 00:07:30.786 #undef SPDK_CONFIG_CRYPTO 00:07:30.786 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:30.786 #undef SPDK_CONFIG_CUSTOMOCF 00:07:30.786 #undef SPDK_CONFIG_DAOS 00:07:30.786 #define SPDK_CONFIG_DAOS_DIR 00:07:30.786 #define SPDK_CONFIG_DEBUG 1 00:07:30.786 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:30.786 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:30.786 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:30.786 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:07:30.786 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:30.786 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:30.786 #define SPDK_CONFIG_EXAMPLES 1 00:07:30.786 #undef SPDK_CONFIG_FC 00:07:30.786 #define SPDK_CONFIG_FC_PATH 00:07:30.786 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:30.786 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:30.786 #undef SPDK_CONFIG_FUSE 00:07:30.786 #undef SPDK_CONFIG_FUZZER 00:07:30.786 #define SPDK_CONFIG_FUZZER_LIB 00:07:30.786 #undef SPDK_CONFIG_GOLANG 00:07:30.786 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:30.786 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:30.786 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:30.786 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:30.786 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:30.786 #define SPDK_CONFIG_IDXD 1 00:07:30.786 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:30.786 #undef SPDK_CONFIG_IPSEC_MB 00:07:30.786 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:30.786 #define SPDK_CONFIG_ISAL 1 00:07:30.786 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:30.786 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:30.786 #define SPDK_CONFIG_LIBDIR 00:07:30.786 #undef SPDK_CONFIG_LTO 00:07:30.786 #define SPDK_CONFIG_MAX_LCORES 00:07:30.786 #define SPDK_CONFIG_NVME_CUSE 1 00:07:30.786 #undef SPDK_CONFIG_OCF 00:07:30.786 #define SPDK_CONFIG_OCF_PATH 00:07:30.786 #define SPDK_CONFIG_OPENSSL_PATH 00:07:30.786 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:30.787 #undef SPDK_CONFIG_PGO_USE 00:07:30.787 #define SPDK_CONFIG_PREFIX /usr/local 00:07:30.787 #undef SPDK_CONFIG_RAID5F 00:07:30.787 #undef SPDK_CONFIG_RBD 00:07:30.787 #define SPDK_CONFIG_RDMA 1 00:07:30.787 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:30.787 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:30.787 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:30.787 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:30.787 #define SPDK_CONFIG_SHARED 1 00:07:30.787 #undef SPDK_CONFIG_SMA 00:07:30.787 #define SPDK_CONFIG_TESTS 1 00:07:30.787 #undef SPDK_CONFIG_TSAN 00:07:30.787 #define SPDK_CONFIG_UBLK 1 00:07:30.787 #define SPDK_CONFIG_UBSAN 1 00:07:30.787 #undef SPDK_CONFIG_UNIT_TESTS 00:07:30.787 #undef SPDK_CONFIG_URING 00:07:30.787 #define SPDK_CONFIG_URING_PATH 00:07:30.787 #undef SPDK_CONFIG_URING_ZNS 00:07:30.787 #undef SPDK_CONFIG_USDT 00:07:30.787 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:30.787 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:30.787 #undef SPDK_CONFIG_VFIO_USER 00:07:30.787 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:30.787 #define SPDK_CONFIG_VHOST 1 00:07:30.787 #define SPDK_CONFIG_VIRTIO 1 00:07:30.787 #undef SPDK_CONFIG_VTUNE 00:07:30.787 #define SPDK_CONFIG_VTUNE_DIR 00:07:30.787 #define SPDK_CONFIG_WERROR 1 00:07:30.787 #define SPDK_CONFIG_WPDK_DIR 00:07:30.787 #undef SPDK_CONFIG_XNVME 00:07:30.787 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:30.787 22:49:58 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:30.787 22:49:58 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.787 22:49:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.787 22:49:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.787 22:49:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.787 22:49:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.787 22:49:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.787 22:49:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.787 22:49:58 -- paths/export.sh@5 -- # export PATH 00:07:30.787 22:49:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.787 22:49:58 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.787 22:49:58 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:30.787 22:49:58 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:30.787 22:49:58 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:30.787 22:49:58 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:30.787 22:49:58 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:30.787 22:49:58 -- pm/common@16 -- # TEST_TAG=N/A 00:07:30.787 22:49:58 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:30.787 22:49:58 -- common/autotest_common.sh@52 -- # : 1 00:07:30.787 22:49:58 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:30.787 22:49:58 -- common/autotest_common.sh@56 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:30.787 22:49:58 -- 
common/autotest_common.sh@58 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:30.787 22:49:58 -- common/autotest_common.sh@60 -- # : 1 00:07:30.787 22:49:58 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:30.787 22:49:58 -- common/autotest_common.sh@62 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:30.787 22:49:58 -- common/autotest_common.sh@64 -- # : 00:07:30.787 22:49:58 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:30.787 22:49:58 -- common/autotest_common.sh@66 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:30.787 22:49:58 -- common/autotest_common.sh@68 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:30.787 22:49:58 -- common/autotest_common.sh@70 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:30.787 22:49:58 -- common/autotest_common.sh@72 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:30.787 22:49:58 -- common/autotest_common.sh@74 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:30.787 22:49:58 -- common/autotest_common.sh@76 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:30.787 22:49:58 -- common/autotest_common.sh@78 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:30.787 22:49:58 -- common/autotest_common.sh@80 -- # : 1 00:07:30.787 22:49:58 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:30.787 22:49:58 -- common/autotest_common.sh@82 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:30.787 22:49:58 -- common/autotest_common.sh@84 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:30.787 22:49:58 -- common/autotest_common.sh@86 -- # : 1 00:07:30.787 22:49:58 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:30.787 22:49:58 -- common/autotest_common.sh@88 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:30.787 22:49:58 -- common/autotest_common.sh@90 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:30.787 22:49:58 -- common/autotest_common.sh@92 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:30.787 22:49:58 -- common/autotest_common.sh@94 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:30.787 22:49:58 -- common/autotest_common.sh@96 -- # : tcp 00:07:30.787 22:49:58 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:30.787 22:49:58 -- common/autotest_common.sh@98 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:30.787 22:49:58 -- common/autotest_common.sh@100 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:30.787 22:49:58 -- common/autotest_common.sh@102 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:30.787 22:49:58 -- common/autotest_common.sh@104 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:30.787 
22:49:58 -- common/autotest_common.sh@106 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:30.787 22:49:58 -- common/autotest_common.sh@108 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:30.787 22:49:58 -- common/autotest_common.sh@110 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:30.787 22:49:58 -- common/autotest_common.sh@112 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:30.787 22:49:58 -- common/autotest_common.sh@114 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:30.787 22:49:58 -- common/autotest_common.sh@116 -- # : 1 00:07:30.787 22:49:58 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:30.787 22:49:58 -- common/autotest_common.sh@118 -- # : 00:07:30.787 22:49:58 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:30.787 22:49:58 -- common/autotest_common.sh@120 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:30.787 22:49:58 -- common/autotest_common.sh@122 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:30.787 22:49:58 -- common/autotest_common.sh@124 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:30.787 22:49:58 -- common/autotest_common.sh@126 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:30.787 22:49:58 -- common/autotest_common.sh@128 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:30.787 22:49:58 -- common/autotest_common.sh@130 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:30.787 22:49:58 -- common/autotest_common.sh@132 -- # : 00:07:30.787 22:49:58 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:30.787 22:49:58 -- common/autotest_common.sh@134 -- # : true 00:07:30.787 22:49:58 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:30.787 22:49:58 -- common/autotest_common.sh@136 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:30.787 22:49:58 -- common/autotest_common.sh@138 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:30.787 22:49:58 -- common/autotest_common.sh@140 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:30.787 22:49:58 -- common/autotest_common.sh@142 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:30.787 22:49:58 -- common/autotest_common.sh@144 -- # : 0 00:07:30.787 22:49:58 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:30.787 22:49:58 -- common/autotest_common.sh@146 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:30.788 22:49:58 -- common/autotest_common.sh@148 -- # : e810 00:07:30.788 22:49:58 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:30.788 22:49:58 -- common/autotest_common.sh@150 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:30.788 22:49:58 -- common/autotest_common.sh@152 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:07:30.788 22:49:58 -- common/autotest_common.sh@154 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:30.788 22:49:58 -- common/autotest_common.sh@156 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:30.788 22:49:58 -- common/autotest_common.sh@158 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:30.788 22:49:58 -- common/autotest_common.sh@160 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:30.788 22:49:58 -- common/autotest_common.sh@163 -- # : 00:07:30.788 22:49:58 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:30.788 22:49:58 -- common/autotest_common.sh@165 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:30.788 22:49:58 -- common/autotest_common.sh@167 -- # : 0 00:07:30.788 22:49:58 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:30.788 22:49:58 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:30.788 22:49:58 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.788 22:49:58 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:30.788 22:49:58 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:30.788 22:49:58 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:30.788 22:49:58 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.788 22:49:58 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:30.788 22:49:58 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.788 22:49:58 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:30.788 22:49:58 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:30.788 22:49:58 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:30.788 22:49:58 -- common/autotest_common.sh@196 -- # cat 00:07:30.788 22:49:58 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:30.788 22:49:58 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.788 22:49:58 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:30.788 22:49:58 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.788 22:49:58 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:30.788 22:49:58 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:30.788 22:49:58 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:30.788 22:49:58 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:30.788 22:49:58 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:30.788 22:49:58 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:30.788 22:49:58 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:30.788 22:49:58 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.788 22:49:58 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:30.788 22:49:58 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.788 22:49:58 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:30.788 22:49:58 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.788 22:49:58 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:30.788 22:49:58 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:30.788 22:49:58 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:30.788 22:49:58 -- common/autotest_common.sh@249 -- # valgrind= 00:07:30.788 22:49:58 -- common/autotest_common.sh@255 -- # uname -s 00:07:30.788 22:49:58 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:30.788 22:49:58 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:30.788 22:49:58 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:30.788 22:49:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:30.788 22:49:58 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j144 00:07:30.788 22:49:58 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:30.788 22:49:58 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:30.788 22:49:58 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:30.788 22:49:58 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:30.788 22:49:58 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:30.788 22:49:58 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:30.788 22:49:58 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:30.788 22:49:58 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:30.788 22:49:58 -- common/autotest_common.sh@309 -- # [[ -z 3911139 ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@309 -- # 
kill -0 3911139 00:07:30.788 22:49:58 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:30.788 22:49:58 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:30.788 22:49:58 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:30.788 22:49:58 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:30.788 22:49:58 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:30.788 22:49:58 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:30.788 22:49:58 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:30.788 22:49:58 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.PMq8wX 00:07:30.788 22:49:58 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:30.788 22:49:58 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:30.788 22:49:58 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PMq8wX/tests/target /tmp/spdk.PMq8wX 00:07:30.788 22:49:58 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:30.788 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.788 22:49:58 -- common/autotest_common.sh@318 -- # df -T 00:07:30.788 22:49:58 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:30.788 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:30.788 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:30.788 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:30.788 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:30.788 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:30.788 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.788 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:30.788 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=956665856 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4327763968 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=118714425344 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=129370980352 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=10656555008 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=64682897408 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=64685490176 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=25864499200 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=25874198528 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=9699328 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=efivarfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=efivarfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=216064 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=507904 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=287744 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=64683786240 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=64685490176 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=1703936 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # avails["$mount"]=12937093120 00:07:30.789 22:49:58 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12937097216 00:07:30.789 22:49:58 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:30.789 22:49:58 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:30.789 22:49:58 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:30.789 * Looking for test storage... 
00:07:30.789 22:49:58 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:30.789 22:49:58 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:30.789 22:49:58 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.789 22:49:58 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:30.789 22:49:58 -- common/autotest_common.sh@363 -- # mount=/ 00:07:30.789 22:49:58 -- common/autotest_common.sh@365 -- # target_space=118714425344 00:07:30.789 22:49:58 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:30.789 22:49:58 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:30.789 22:49:58 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:30.789 22:49:58 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:30.789 22:49:58 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:30.789 22:49:58 -- common/autotest_common.sh@372 -- # new_size=12871147520 00:07:30.789 22:49:58 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:30.789 22:49:58 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.789 22:49:58 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.789 22:49:58 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.789 22:49:58 -- common/autotest_common.sh@380 -- # return 0 00:07:30.789 22:49:58 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:30.789 22:49:58 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:30.789 22:49:58 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:30.789 22:49:58 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:30.789 22:49:58 -- common/autotest_common.sh@1672 -- # true 00:07:30.789 22:49:58 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:30.789 22:49:58 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:30.789 22:49:58 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:30.789 22:49:58 -- common/autotest_common.sh@27 -- # exec 00:07:30.789 22:49:58 -- common/autotest_common.sh@29 -- # exec 00:07:30.789 22:49:58 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:30.789 22:49:58 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:30.789 22:49:58 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:30.789 22:49:58 -- common/autotest_common.sh@18 -- # set -x 00:07:30.789 22:49:58 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.789 22:49:58 -- nvmf/common.sh@7 -- # uname -s 00:07:30.789 22:49:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.789 22:49:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.789 22:49:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.789 22:49:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.789 22:49:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.789 22:49:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.789 22:49:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.789 22:49:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.789 22:49:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.789 22:49:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.789 22:49:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.789 22:49:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:30.789 22:49:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.789 22:49:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.789 22:49:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.789 22:49:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.789 22:49:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.789 22:49:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.789 22:49:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.789 22:49:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.789 22:49:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.789 22:49:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.789 22:49:58 -- paths/export.sh@5 -- # export PATH 00:07:30.789 22:49:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.789 22:49:58 -- nvmf/common.sh@46 -- # : 0 00:07:30.789 22:49:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:30.789 22:49:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:30.789 22:49:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:30.789 22:49:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.789 22:49:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.789 22:49:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:30.789 22:49:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:30.789 22:49:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:30.789 22:49:58 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:30.789 22:49:58 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:30.789 22:49:58 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:30.789 22:49:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:30.789 22:49:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.789 22:49:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:30.789 22:49:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:30.789 22:49:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:30.789 22:49:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.789 22:49:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.789 22:49:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.789 22:49:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:30.789 22:49:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:30.789 22:49:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:30.789 22:49:58 -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 22:50:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:37.382 22:50:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:37.382 22:50:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:37.382 22:50:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:37.382 22:50:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:37.382 22:50:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:37.382 22:50:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:37.382 22:50:05 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:37.382 22:50:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:37.382 22:50:05 -- nvmf/common.sh@295 -- # e810=() 00:07:37.383 22:50:05 -- nvmf/common.sh@295 -- # local -ga e810 00:07:37.383 22:50:05 -- nvmf/common.sh@296 -- # x722=() 00:07:37.383 22:50:05 -- nvmf/common.sh@296 -- # local -ga x722 00:07:37.383 22:50:05 -- nvmf/common.sh@297 -- # mlx=() 00:07:37.383 22:50:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:37.383 22:50:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.383 22:50:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:37.383 22:50:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:37.383 22:50:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:37.383 22:50:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:37.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:37.383 22:50:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:37.383 22:50:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:37.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:37.383 22:50:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:37.383 22:50:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.383 22:50:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.383 22:50:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:37.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:37.383 22:50:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.383 22:50:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:37.383 22:50:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.383 22:50:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.383 22:50:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:37.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:37.383 22:50:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.383 22:50:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:37.383 22:50:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:37.383 22:50:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:37.383 22:50:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.383 22:50:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.383 22:50:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.383 22:50:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:37.383 22:50:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.383 22:50:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.383 22:50:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:37.383 22:50:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.383 22:50:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.383 22:50:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:37.383 22:50:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:37.383 22:50:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.383 22:50:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.645 22:50:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.645 22:50:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.645 22:50:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:37.645 22:50:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.645 22:50:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.645 22:50:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.645 22:50:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:37.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:07:37.645 00:07:37.645 --- 10.0.0.2 ping statistics --- 00:07:37.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.645 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:07:37.645 22:50:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:37.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:07:37.907 00:07:37.907 --- 10.0.0.1 ping statistics --- 00:07:37.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.907 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:07:37.907 22:50:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.907 22:50:05 -- nvmf/common.sh@410 -- # return 0 00:07:37.907 22:50:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:37.907 22:50:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.907 22:50:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:37.907 22:50:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:37.907 22:50:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.907 22:50:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:37.907 22:50:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:37.907 22:50:05 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:37.907 22:50:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:37.907 22:50:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.907 22:50:05 -- common/autotest_common.sh@10 -- # set +x 00:07:37.907 ************************************ 00:07:37.907 START TEST nvmf_filesystem_no_in_capsule 00:07:37.907 ************************************ 00:07:37.907 22:50:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:37.907 22:50:05 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:37.907 22:50:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:37.907 22:50:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:37.907 22:50:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:37.907 22:50:05 -- common/autotest_common.sh@10 -- # set +x 00:07:37.907 22:50:05 -- nvmf/common.sh@469 -- # nvmfpid=3914913 00:07:37.907 22:50:05 -- nvmf/common.sh@470 -- # waitforlisten 3914913 00:07:37.907 22:50:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.907 22:50:05 -- common/autotest_common.sh@819 -- # '[' -z 3914913 ']' 00:07:37.907 22:50:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.907 22:50:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:37.907 22:50:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.907 22:50:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:37.907 22:50:05 -- common/autotest_common.sh@10 -- # set +x 00:07:37.907 [2024-06-09 22:50:05.938488] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
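nvmf_tcp_init above splits the two ports across network namespaces: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction proves the wiring before the target is started. Condensed into a standalone sketch (interface names, addresses and the namespace name are taken from this log; the real helper in nvmf/common.sh does additional bookkeeping that is omitted here):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                               # root namespace -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> initiator address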
00:07:37.907 [2024-06-09 22:50:05.938547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.907 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.907 [2024-06-09 22:50:06.010834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.907 [2024-06-09 22:50:06.084852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.907 [2024-06-09 22:50:06.084992] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.907 [2024-06-09 22:50:06.085003] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.907 [2024-06-09 22:50:06.085012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.907 [2024-06-09 22:50:06.085151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.169 [2024-06-09 22:50:06.085256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.169 [2024-06-09 22:50:06.085434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.169 [2024-06-09 22:50:06.085437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.743 22:50:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:38.743 22:50:06 -- common/autotest_common.sh@852 -- # return 0 00:07:38.743 22:50:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:38.743 22:50:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 22:50:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.743 22:50:06 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:38.743 22:50:06 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 [2024-06-09 22:50:06.759577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
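With nvmf_tgt listening on /var/tmp/spdk.sock inside the namespace, the trace above provisions it over RPC: a TCP transport with in-capsule data disabled (-c 0), a 512 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. rpc_cmd in the harness ultimately drives scripts/rpc.py, so by hand the same sequence looks roughly like this (using plain rpc.py, and dropping the ip netns exec prefix this run needs, are the assumptions here):

    #!/usr/bin/env bash
    rpc=scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 512 512 -b Malloc1       # 512 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The host side then attaches with the nvme CLI exactly as the trace shows a little further on: nvme connect with the generated hostnqn/hostid against 10.0.0.2 port 4420.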
00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 [2024-06-09 22:50:06.888662] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:38.743 22:50:06 -- common/autotest_common.sh@1359 -- # local bs 00:07:38.743 22:50:06 -- common/autotest_common.sh@1360 -- # local nb 00:07:38.743 22:50:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:38.743 22:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:38.743 22:50:06 -- common/autotest_common.sh@10 -- # set +x 00:07:38.743 22:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.743 22:50:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:38.743 { 00:07:38.743 "name": "Malloc1", 00:07:38.743 "aliases": [ 00:07:38.743 "3490a077-bbe8-4f9c-b3f9-fb2175437e1f" 00:07:38.743 ], 00:07:38.743 "product_name": "Malloc disk", 00:07:38.743 "block_size": 512, 00:07:38.743 "num_blocks": 1048576, 00:07:38.743 "uuid": "3490a077-bbe8-4f9c-b3f9-fb2175437e1f", 00:07:38.743 "assigned_rate_limits": { 00:07:38.743 "rw_ios_per_sec": 0, 00:07:38.743 "rw_mbytes_per_sec": 0, 00:07:38.743 "r_mbytes_per_sec": 0, 00:07:38.743 "w_mbytes_per_sec": 0 00:07:38.743 }, 00:07:38.743 "claimed": true, 00:07:38.743 "claim_type": "exclusive_write", 00:07:38.743 "zoned": false, 00:07:38.743 "supported_io_types": { 00:07:38.743 "read": true, 00:07:38.743 "write": true, 00:07:38.743 "unmap": true, 00:07:38.743 "write_zeroes": true, 00:07:38.743 "flush": true, 00:07:38.743 "reset": true, 00:07:38.743 "compare": false, 00:07:38.743 "compare_and_write": false, 00:07:38.743 "abort": true, 00:07:38.743 "nvme_admin": false, 00:07:38.743 "nvme_io": false 00:07:38.743 }, 00:07:38.743 "memory_domains": [ 00:07:38.743 { 00:07:38.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.743 "dma_device_type": 2 00:07:38.743 } 00:07:38.743 ], 00:07:38.743 "driver_specific": {} 00:07:38.743 } 00:07:38.743 ]' 00:07:38.743 22:50:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:39.005 22:50:06 -- common/autotest_common.sh@1362 -- # bs=512 00:07:39.005 22:50:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:39.005 22:50:07 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:39.005 22:50:07 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:39.005 22:50:07 -- common/autotest_common.sh@1367 -- # echo 512 00:07:39.005 22:50:07 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:39.005 22:50:07 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:40.393 22:50:08 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.393 22:50:08 -- common/autotest_common.sh@1177 -- # local i=0 00:07:40.393 22:50:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.393 22:50:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:40.393 22:50:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:42.942 22:50:10 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:42.942 22:50:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:42.942 22:50:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:42.942 22:50:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:42.942 22:50:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:42.942 22:50:10 -- common/autotest_common.sh@1187 -- # return 0 00:07:42.942 22:50:10 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:42.942 22:50:10 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:42.942 22:50:10 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:42.942 22:50:10 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:42.942 22:50:10 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:42.942 22:50:10 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:42.943 22:50:10 -- setup/common.sh@80 -- # echo 536870912 00:07:42.943 22:50:10 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:42.943 22:50:10 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:42.943 22:50:10 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:42.943 22:50:10 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:42.943 22:50:10 -- target/filesystem.sh@69 -- # partprobe 00:07:42.943 22:50:11 -- target/filesystem.sh@70 -- # sleep 1 00:07:44.327 22:50:12 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:44.327 22:50:12 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:44.327 22:50:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:44.327 22:50:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.327 22:50:12 -- common/autotest_common.sh@10 -- # set +x 00:07:44.327 ************************************ 00:07:44.327 START TEST filesystem_ext4 00:07:44.327 ************************************ 00:07:44.327 22:50:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:44.327 22:50:12 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:44.327 22:50:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.327 22:50:12 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:44.327 22:50:12 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:44.327 22:50:12 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:44.327 22:50:12 -- common/autotest_common.sh@904 -- # local i=0 00:07:44.327 22:50:12 -- common/autotest_common.sh@905 -- # local force 00:07:44.327 22:50:12 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:44.327 22:50:12 -- common/autotest_common.sh@908 -- # force=-F 00:07:44.327 22:50:12 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:44.327 mke2fs 1.46.5 (30-Dec-2021) 00:07:44.327 Discarding device blocks: 0/522240 done 00:07:44.327 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:44.327 Filesystem UUID: 6ff3ec4f-b398-4446-bb2a-3d049391927d 00:07:44.327 Superblock backups stored on blocks: 00:07:44.327 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:44.327 00:07:44.327 Allocating group tables: 0/64 done 00:07:44.327 Writing inode tables: 0/64 done 00:07:44.900 Creating journal (8192 blocks): done 00:07:45.732 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:45.732 00:07:45.732 22:50:13 -- 
common/autotest_common.sh@921 -- # return 0 00:07:45.732 22:50:13 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.994 22:50:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.994 22:50:14 -- target/filesystem.sh@25 -- # sync 00:07:45.994 22:50:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.994 22:50:14 -- target/filesystem.sh@27 -- # sync 00:07:45.994 22:50:14 -- target/filesystem.sh@29 -- # i=0 00:07:45.994 22:50:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.994 22:50:14 -- target/filesystem.sh@37 -- # kill -0 3914913 00:07:45.994 22:50:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.994 22:50:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.994 22:50:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.994 22:50:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.994 00:07:45.994 real 0m2.013s 00:07:45.994 user 0m0.023s 00:07:45.994 sys 0m0.072s 00:07:45.994 22:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.994 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.994 ************************************ 00:07:45.994 END TEST filesystem_ext4 00:07:45.994 ************************************ 00:07:45.994 22:50:14 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:45.994 22:50:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:45.994 22:50:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.994 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:07:45.994 ************************************ 00:07:45.994 START TEST filesystem_btrfs 00:07:45.994 ************************************ 00:07:45.994 22:50:14 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:45.994 22:50:14 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:45.994 22:50:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:45.994 22:50:14 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:45.994 22:50:14 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:45.994 22:50:14 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:45.994 22:50:14 -- common/autotest_common.sh@904 -- # local i=0 00:07:45.994 22:50:14 -- common/autotest_common.sh@905 -- # local force 00:07:45.994 22:50:14 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:45.994 22:50:14 -- common/autotest_common.sh@910 -- # force=-f 00:07:45.994 22:50:14 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:46.567 btrfs-progs v6.6.2 00:07:46.567 See https://btrfs.readthedocs.io for more information. 00:07:46.567 00:07:46.567 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
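Each filesystem_* subtest above repeats one pattern: build the filesystem on /dev/nvme0n1p1 (the single partition created on the exported namespace), mount it, create and delete a file with a sync after each step, unmount, then verify with kill -0 that the nvmf target is still alive and with lsblk that the namespace and partition are still visible. Collapsed into a sketch, with the device path and PID taken from this pass (the i=0 in the trace suggests the helper also allows retries, which are left out):

    #!/usr/bin/env bash
    set -e
    dev=/dev/nvme0n1p1
    nvmfpid=3914913                       # nvmf_tgt PID for this pass
    mkfs.ext4 -F "$dev"                   # the btrfs/xfs passes use mkfs.btrfs -f / mkfs.xfs -f
    mount "$dev" /mnt/device
    touch /mnt/device/aaa; sync
    rm /mnt/device/aaa;    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                    # target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1
    lsblk -l -o NAME | grep -q -w nvme0n1p1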
00:07:46.567 NOTE: several default settings have changed in version 5.15, please make sure 00:07:46.567 this does not affect your deployments: 00:07:46.567 - DUP for metadata (-m dup) 00:07:46.567 - enabled no-holes (-O no-holes) 00:07:46.567 - enabled free-space-tree (-R free-space-tree) 00:07:46.567 00:07:46.567 Label: (null) 00:07:46.567 UUID: 07d7e285-8f2f-4ab9-acae-0dc10e9b0cea 00:07:46.567 Node size: 16384 00:07:46.567 Sector size: 4096 00:07:46.567 Filesystem size: 510.00MiB 00:07:46.567 Block group profiles: 00:07:46.567 Data: single 8.00MiB 00:07:46.567 Metadata: DUP 32.00MiB 00:07:46.567 System: DUP 8.00MiB 00:07:46.567 SSD detected: yes 00:07:46.567 Zoned device: no 00:07:46.567 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:46.567 Runtime features: free-space-tree 00:07:46.567 Checksum: crc32c 00:07:46.567 Number of devices: 1 00:07:46.567 Devices: 00:07:46.567 ID SIZE PATH 00:07:46.567 1 510.00MiB /dev/nvme0n1p1 00:07:46.567 00:07:46.567 22:50:14 -- common/autotest_common.sh@921 -- # return 0 00:07:46.567 22:50:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.567 22:50:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.567 22:50:14 -- target/filesystem.sh@25 -- # sync 00:07:46.829 22:50:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.829 22:50:14 -- target/filesystem.sh@27 -- # sync 00:07:46.829 22:50:14 -- target/filesystem.sh@29 -- # i=0 00:07:46.829 22:50:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.829 22:50:14 -- target/filesystem.sh@37 -- # kill -0 3914913 00:07:46.829 22:50:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.829 22:50:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.829 22:50:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.829 22:50:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.829 00:07:46.829 real 0m0.653s 00:07:46.829 user 0m0.020s 00:07:46.829 sys 0m0.137s 00:07:46.829 22:50:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.829 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:07:46.829 ************************************ 00:07:46.829 END TEST filesystem_btrfs 00:07:46.829 ************************************ 00:07:46.829 22:50:14 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:46.829 22:50:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:46.829 22:50:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.829 22:50:14 -- common/autotest_common.sh@10 -- # set +x 00:07:46.829 ************************************ 00:07:46.829 START TEST filesystem_xfs 00:07:46.829 ************************************ 00:07:46.829 22:50:14 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:46.829 22:50:14 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:46.829 22:50:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:46.829 22:50:14 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:46.829 22:50:14 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:46.829 22:50:14 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:46.829 22:50:14 -- common/autotest_common.sh@904 -- # local i=0 00:07:46.829 22:50:14 -- common/autotest_common.sh@905 -- # local force 00:07:46.829 22:50:14 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:46.829 22:50:14 -- common/autotest_common.sh@910 -- # force=-f 00:07:46.829 22:50:14 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:46.829 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:46.829 = sectsz=512 attr=2, projid32bit=1 00:07:46.829 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:46.829 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:46.829 data = bsize=4096 blocks=130560, imaxpct=25 00:07:46.829 = sunit=0 swidth=0 blks 00:07:46.829 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:46.829 log =internal log bsize=4096 blocks=16384, version=2 00:07:46.829 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:46.829 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:48.217 Discarding blocks...Done. 00:07:48.217 22:50:16 -- common/autotest_common.sh@921 -- # return 0 00:07:48.217 22:50:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:50.762 22:50:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:50.762 22:50:18 -- target/filesystem.sh@25 -- # sync 00:07:50.762 22:50:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:50.762 22:50:18 -- target/filesystem.sh@27 -- # sync 00:07:50.762 22:50:18 -- target/filesystem.sh@29 -- # i=0 00:07:50.762 22:50:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:50.762 22:50:18 -- target/filesystem.sh@37 -- # kill -0 3914913 00:07:50.762 22:50:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:50.762 22:50:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:50.762 22:50:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:50.762 22:50:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:50.762 00:07:50.762 real 0m3.771s 00:07:50.762 user 0m0.034s 00:07:50.762 sys 0m0.070s 00:07:50.762 22:50:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.762 22:50:18 -- common/autotest_common.sh@10 -- # set +x 00:07:50.762 ************************************ 00:07:50.762 END TEST filesystem_xfs 00:07:50.762 ************************************ 00:07:50.762 22:50:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:51.022 22:50:18 -- target/filesystem.sh@93 -- # sync 00:07:51.284 22:50:19 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:51.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.545 22:50:19 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:51.545 22:50:19 -- common/autotest_common.sh@1198 -- # local i=0 00:07:51.545 22:50:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:51.545 22:50:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.545 22:50:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:51.545 22:50:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:51.545 22:50:19 -- common/autotest_common.sh@1210 -- # return 0 00:07:51.545 22:50:19 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.545 22:50:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:51.545 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:51.545 22:50:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:51.545 22:50:19 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:51.545 22:50:19 -- target/filesystem.sh@101 -- # killprocess 3914913 00:07:51.545 22:50:19 -- common/autotest_common.sh@926 -- # '[' -z 3914913 ']' 00:07:51.545 22:50:19 -- common/autotest_common.sh@930 -- # kill -0 3914913 00:07:51.545 22:50:19 -- 
common/autotest_common.sh@931 -- # uname 00:07:51.545 22:50:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:51.545 22:50:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3914913 00:07:51.545 22:50:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:51.545 22:50:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:51.545 22:50:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3914913' 00:07:51.545 killing process with pid 3914913 00:07:51.545 22:50:19 -- common/autotest_common.sh@945 -- # kill 3914913 00:07:51.545 22:50:19 -- common/autotest_common.sh@950 -- # wait 3914913 00:07:51.806 22:50:19 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:51.806 00:07:51.806 real 0m13.941s 00:07:51.806 user 0m54.943s 00:07:51.806 sys 0m1.145s 00:07:51.806 22:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.806 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:51.806 ************************************ 00:07:51.806 END TEST nvmf_filesystem_no_in_capsule 00:07:51.806 ************************************ 00:07:51.806 22:50:19 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:51.806 22:50:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:51.806 22:50:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.806 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:51.806 ************************************ 00:07:51.806 START TEST nvmf_filesystem_in_capsule 00:07:51.806 ************************************ 00:07:51.806 22:50:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:51.806 22:50:19 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:51.806 22:50:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:51.806 22:50:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:51.806 22:50:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:51.806 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:51.806 22:50:19 -- nvmf/common.sh@469 -- # nvmfpid=3918001 00:07:51.806 22:50:19 -- nvmf/common.sh@470 -- # waitforlisten 3918001 00:07:51.806 22:50:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.806 22:50:19 -- common/autotest_common.sh@819 -- # '[' -z 3918001 ']' 00:07:51.806 22:50:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.806 22:50:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:51.806 22:50:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.806 22:50:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:51.806 22:50:19 -- common/autotest_common.sh@10 -- # set +x 00:07:51.806 [2024-06-09 22:50:19.923373] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
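The first pass (in_capsule=0) finishes above by removing the test partition under flock, disconnecting the host, deleting the subsystem, and killing the target; the nvmf_filesystem_in_capsule run that starts next repeats the entire flow with 4096-byte in-capsule data. The teardown, with the NQN and PID from this run and rpc.py again standing in for the harness's rpc_cmd, is roughly:

    #!/usr/bin/env bash
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1          # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # host side
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 3914913 && wait 3914913    # works from the shell that launched nvmf_tgt, as killprocess does here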
00:07:51.806 [2024-06-09 22:50:19.923453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.806 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.067 [2024-06-09 22:50:19.987457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.067 [2024-06-09 22:50:20.053363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:52.067 [2024-06-09 22:50:20.053498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.067 [2024-06-09 22:50:20.053509] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.067 [2024-06-09 22:50:20.053517] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.067 [2024-06-09 22:50:20.053874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.067 [2024-06-09 22:50:20.053989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.067 [2024-06-09 22:50:20.054119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.067 [2024-06-09 22:50:20.054120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.639 22:50:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:52.639 22:50:20 -- common/autotest_common.sh@852 -- # return 0 00:07:52.639 22:50:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:52.639 22:50:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:52.639 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 22:50:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.639 22:50:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:52.639 22:50:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:52.639 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.639 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.639 [2024-06-09 22:50:20.740612] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.639 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.639 22:50:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:52.639 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.639 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 Malloc1 00:07:52.937 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.937 22:50:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:52.937 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.937 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.937 22:50:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:52.937 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.937 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.937 22:50:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
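The only functional difference in this second pass is the transport: in-capsule data of up to 4096 bytes is enabled, so small writes can travel inside the NVMe/TCP command capsule instead of needing a separate data transfer. Side by side, with rpc.py again standing in for rpc_cmd:

    # pass 1: nvmf_filesystem_no_in_capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # pass 2: nvmf_filesystem_in_capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096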
00:07:52.937 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.937 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 [2024-06-09 22:50:20.865429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.937 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.937 22:50:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:52.937 22:50:20 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:52.937 22:50:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:52.937 22:50:20 -- common/autotest_common.sh@1359 -- # local bs 00:07:52.937 22:50:20 -- common/autotest_common.sh@1360 -- # local nb 00:07:52.937 22:50:20 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:52.937 22:50:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:52.937 22:50:20 -- common/autotest_common.sh@10 -- # set +x 00:07:52.937 22:50:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:52.937 22:50:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:52.937 { 00:07:52.937 "name": "Malloc1", 00:07:52.937 "aliases": [ 00:07:52.937 "1fe246a9-b230-4e1f-b9b6-9e6f1e748f23" 00:07:52.937 ], 00:07:52.937 "product_name": "Malloc disk", 00:07:52.937 "block_size": 512, 00:07:52.937 "num_blocks": 1048576, 00:07:52.937 "uuid": "1fe246a9-b230-4e1f-b9b6-9e6f1e748f23", 00:07:52.937 "assigned_rate_limits": { 00:07:52.937 "rw_ios_per_sec": 0, 00:07:52.937 "rw_mbytes_per_sec": 0, 00:07:52.937 "r_mbytes_per_sec": 0, 00:07:52.937 "w_mbytes_per_sec": 0 00:07:52.937 }, 00:07:52.937 "claimed": true, 00:07:52.937 "claim_type": "exclusive_write", 00:07:52.937 "zoned": false, 00:07:52.937 "supported_io_types": { 00:07:52.937 "read": true, 00:07:52.937 "write": true, 00:07:52.937 "unmap": true, 00:07:52.937 "write_zeroes": true, 00:07:52.937 "flush": true, 00:07:52.937 "reset": true, 00:07:52.937 "compare": false, 00:07:52.937 "compare_and_write": false, 00:07:52.937 "abort": true, 00:07:52.937 "nvme_admin": false, 00:07:52.937 "nvme_io": false 00:07:52.937 }, 00:07:52.937 "memory_domains": [ 00:07:52.937 { 00:07:52.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.937 "dma_device_type": 2 00:07:52.937 } 00:07:52.937 ], 00:07:52.937 "driver_specific": {} 00:07:52.937 } 00:07:52.937 ]' 00:07:52.937 22:50:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:52.937 22:50:20 -- common/autotest_common.sh@1362 -- # bs=512 00:07:52.937 22:50:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:52.937 22:50:20 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:52.937 22:50:20 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:52.937 22:50:20 -- common/autotest_common.sh@1367 -- # echo 512 00:07:52.937 22:50:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:52.937 22:50:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.871 22:50:22 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:54.871 22:50:22 -- common/autotest_common.sh@1177 -- # local i=0 00:07:54.871 22:50:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:54.871 22:50:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:54.871 22:50:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:56.787 22:50:24 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:56.787 22:50:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:56.787 22:50:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:56.787 22:50:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:56.787 22:50:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:56.787 22:50:24 -- common/autotest_common.sh@1187 -- # return 0 00:07:56.787 22:50:24 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:56.787 22:50:24 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:56.787 22:50:24 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:56.787 22:50:24 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:56.787 22:50:24 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:56.787 22:50:24 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:56.787 22:50:24 -- setup/common.sh@80 -- # echo 536870912 00:07:56.787 22:50:24 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:56.787 22:50:24 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:56.787 22:50:24 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:56.788 22:50:24 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:56.788 22:50:24 -- target/filesystem.sh@69 -- # partprobe 00:07:57.049 22:50:25 -- target/filesystem.sh@70 -- # sleep 1 00:07:57.993 22:50:26 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:57.994 22:50:26 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:57.994 22:50:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:57.994 22:50:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.994 22:50:26 -- common/autotest_common.sh@10 -- # set +x 00:07:57.994 ************************************ 00:07:57.994 START TEST filesystem_in_capsule_ext4 00:07:57.994 ************************************ 00:07:57.994 22:50:26 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:57.994 22:50:26 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:57.994 22:50:26 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.994 22:50:26 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:57.994 22:50:26 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:57.994 22:50:26 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:57.994 22:50:26 -- common/autotest_common.sh@904 -- # local i=0 00:07:57.994 22:50:26 -- common/autotest_common.sh@905 -- # local force 00:07:57.994 22:50:26 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:57.994 22:50:26 -- common/autotest_common.sh@908 -- # force=-F 00:07:57.994 22:50:26 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:57.994 mke2fs 1.46.5 (30-Dec-2021) 00:07:57.994 Discarding device blocks: 0/522240 done 00:07:58.254 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:58.254 Filesystem UUID: 3a38097f-a9f2-4105-8a8a-524603948275 00:07:58.254 Superblock backups stored on blocks: 00:07:58.254 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:58.254 00:07:58.254 Allocating group tables: 0/64 done 00:07:58.255 Writing inode tables: 0/64 done 00:07:58.255 Creating journal (8192 blocks): done 00:07:59.197 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:07:59.197 00:07:59.197 
22:50:27 -- common/autotest_common.sh@921 -- # return 0 00:07:59.197 22:50:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.770 22:50:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.770 22:50:27 -- target/filesystem.sh@25 -- # sync 00:07:59.770 22:50:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.770 22:50:27 -- target/filesystem.sh@27 -- # sync 00:07:59.770 22:50:27 -- target/filesystem.sh@29 -- # i=0 00:07:59.770 22:50:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.770 22:50:27 -- target/filesystem.sh@37 -- # kill -0 3918001 00:07:59.770 22:50:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.770 22:50:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.770 22:50:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.770 22:50:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.770 00:07:59.770 real 0m1.862s 00:07:59.770 user 0m0.026s 00:07:59.770 sys 0m0.070s 00:07:59.770 22:50:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.770 22:50:27 -- common/autotest_common.sh@10 -- # set +x 00:07:59.770 ************************************ 00:07:59.770 END TEST filesystem_in_capsule_ext4 00:07:59.770 ************************************ 00:08:00.032 22:50:27 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:00.032 22:50:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:00.032 22:50:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.032 22:50:27 -- common/autotest_common.sh@10 -- # set +x 00:08:00.032 ************************************ 00:08:00.032 START TEST filesystem_in_capsule_btrfs 00:08:00.032 ************************************ 00:08:00.032 22:50:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:00.032 22:50:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:00.032 22:50:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.032 22:50:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:00.032 22:50:27 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:00.032 22:50:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:00.032 22:50:27 -- common/autotest_common.sh@904 -- # local i=0 00:08:00.032 22:50:27 -- common/autotest_common.sh@905 -- # local force 00:08:00.032 22:50:27 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:00.032 22:50:27 -- common/autotest_common.sh@910 -- # force=-f 00:08:00.032 22:50:27 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:00.032 btrfs-progs v6.6.2 00:08:00.032 See https://btrfs.readthedocs.io for more information. 00:08:00.032 00:08:00.032 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:00.032 NOTE: several default settings have changed in version 5.15, please make sure 00:08:00.032 this does not affect your deployments: 00:08:00.032 - DUP for metadata (-m dup) 00:08:00.032 - enabled no-holes (-O no-holes) 00:08:00.032 - enabled free-space-tree (-R free-space-tree) 00:08:00.032 00:08:00.032 Label: (null) 00:08:00.032 UUID: 4915da5c-52b2-490f-a7f5-c57fd62dabd4 00:08:00.032 Node size: 16384 00:08:00.032 Sector size: 4096 00:08:00.032 Filesystem size: 510.00MiB 00:08:00.032 Block group profiles: 00:08:00.032 Data: single 8.00MiB 00:08:00.032 Metadata: DUP 32.00MiB 00:08:00.032 System: DUP 8.00MiB 00:08:00.032 SSD detected: yes 00:08:00.032 Zoned device: no 00:08:00.032 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:00.032 Runtime features: free-space-tree 00:08:00.032 Checksum: crc32c 00:08:00.032 Number of devices: 1 00:08:00.032 Devices: 00:08:00.032 ID SIZE PATH 00:08:00.032 1 510.00MiB /dev/nvme0n1p1 00:08:00.032 00:08:00.032 22:50:28 -- common/autotest_common.sh@921 -- # return 0 00:08:00.032 22:50:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.605 22:50:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.605 22:50:28 -- target/filesystem.sh@25 -- # sync 00:08:00.605 22:50:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.605 22:50:28 -- target/filesystem.sh@27 -- # sync 00:08:00.605 22:50:28 -- target/filesystem.sh@29 -- # i=0 00:08:00.605 22:50:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.605 22:50:28 -- target/filesystem.sh@37 -- # kill -0 3918001 00:08:00.605 22:50:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.605 22:50:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.606 22:50:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.606 22:50:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.606 00:08:00.606 real 0m0.674s 00:08:00.606 user 0m0.027s 00:08:00.606 sys 0m0.131s 00:08:00.606 22:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.606 22:50:28 -- common/autotest_common.sh@10 -- # set +x 00:08:00.606 ************************************ 00:08:00.606 END TEST filesystem_in_capsule_btrfs 00:08:00.606 ************************************ 00:08:00.606 22:50:28 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:00.606 22:50:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:00.606 22:50:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.606 22:50:28 -- common/autotest_common.sh@10 -- # set +x 00:08:00.606 ************************************ 00:08:00.606 START TEST filesystem_in_capsule_xfs 00:08:00.606 ************************************ 00:08:00.606 22:50:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:00.606 22:50:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:00.606 22:50:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.606 22:50:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:00.606 22:50:28 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:00.606 22:50:28 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:00.606 22:50:28 -- common/autotest_common.sh@904 -- # local i=0 00:08:00.606 22:50:28 -- common/autotest_common.sh@905 -- # local force 00:08:00.606 22:50:28 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:00.606 22:50:28 -- common/autotest_common.sh@910 -- # force=-f 
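make_filesystem, traced above for xfs, is a small dispatcher: ext4 gets the -F force flag, every other filesystem gets -f, and the matching mkfs tool is then run against the device. Reconstructed from the xtrace (the real helper in autotest_common.sh also tracks a retry counter i, visible above but not reproduced here):

    # Sketch inferred from the trace, not the verbatim helper.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        "mkfs.$fstype" $force "$dev_name"
    }
    # e.g. make_filesystem xfs /dev/nvme0n1p1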
00:08:00.606 22:50:28 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:00.606 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:00.606 = sectsz=512 attr=2, projid32bit=1 00:08:00.606 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:00.606 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:00.606 data = bsize=4096 blocks=130560, imaxpct=25 00:08:00.606 = sunit=0 swidth=0 blks 00:08:00.606 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:00.606 log =internal log bsize=4096 blocks=16384, version=2 00:08:00.606 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:00.606 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.550 Discarding blocks...Done. 00:08:01.550 22:50:29 -- common/autotest_common.sh@921 -- # return 0 00:08:01.550 22:50:29 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.101 22:50:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.101 22:50:32 -- target/filesystem.sh@25 -- # sync 00:08:04.101 22:50:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.101 22:50:32 -- target/filesystem.sh@27 -- # sync 00:08:04.101 22:50:32 -- target/filesystem.sh@29 -- # i=0 00:08:04.101 22:50:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.101 22:50:32 -- target/filesystem.sh@37 -- # kill -0 3918001 00:08:04.101 22:50:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.101 22:50:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.101 22:50:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.101 22:50:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.101 00:08:04.101 real 0m3.436s 00:08:04.101 user 0m0.026s 00:08:04.101 sys 0m0.076s 00:08:04.101 22:50:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.101 22:50:32 -- common/autotest_common.sh@10 -- # set +x 00:08:04.101 ************************************ 00:08:04.101 END TEST filesystem_in_capsule_xfs 00:08:04.101 ************************************ 00:08:04.101 22:50:32 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:04.360 22:50:32 -- target/filesystem.sh@93 -- # sync 00:08:04.361 22:50:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:04.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.622 22:50:32 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:04.622 22:50:32 -- common/autotest_common.sh@1198 -- # local i=0 00:08:04.622 22:50:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:04.622 22:50:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.622 22:50:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:04.622 22:50:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:04.622 22:50:32 -- common/autotest_common.sh@1210 -- # return 0 00:08:04.622 22:50:32 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:04.622 22:50:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.622 22:50:32 -- common/autotest_common.sh@10 -- # set +x 00:08:04.622 22:50:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.622 22:50:32 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:04.622 22:50:32 -- target/filesystem.sh@101 -- # killprocess 3918001 00:08:04.622 22:50:32 -- common/autotest_common.sh@926 -- # '[' -z 3918001 ']' 00:08:04.622 22:50:32 -- common/autotest_common.sh@930 -- # kill -0 3918001 
00:08:04.622 22:50:32 -- common/autotest_common.sh@931 -- # uname 00:08:04.622 22:50:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:04.622 22:50:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3918001 00:08:04.622 22:50:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:04.622 22:50:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:04.622 22:50:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3918001' 00:08:04.622 killing process with pid 3918001 00:08:04.622 22:50:32 -- common/autotest_common.sh@945 -- # kill 3918001 00:08:04.622 22:50:32 -- common/autotest_common.sh@950 -- # wait 3918001 00:08:04.884 22:50:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:04.884 00:08:04.884 real 0m13.069s 00:08:04.884 user 0m51.505s 00:08:04.884 sys 0m1.172s 00:08:04.884 22:50:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.884 22:50:32 -- common/autotest_common.sh@10 -- # set +x 00:08:04.884 ************************************ 00:08:04.884 END TEST nvmf_filesystem_in_capsule 00:08:04.884 ************************************ 00:08:04.884 22:50:32 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:04.884 22:50:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:04.884 22:50:32 -- nvmf/common.sh@116 -- # sync 00:08:04.884 22:50:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:04.884 22:50:32 -- nvmf/common.sh@119 -- # set +e 00:08:04.884 22:50:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:04.884 22:50:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:04.884 rmmod nvme_tcp 00:08:04.884 rmmod nvme_fabrics 00:08:04.884 rmmod nvme_keyring 00:08:04.884 22:50:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:04.884 22:50:33 -- nvmf/common.sh@123 -- # set -e 00:08:04.884 22:50:33 -- nvmf/common.sh@124 -- # return 0 00:08:04.884 22:50:33 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:04.884 22:50:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:04.884 22:50:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:04.884 22:50:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:04.884 22:50:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:04.884 22:50:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:04.884 22:50:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.884 22:50:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.884 22:50:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.429 22:50:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:07.430 00:08:07.430 real 0m36.500s 00:08:07.430 user 1m48.586s 00:08:07.430 sys 0m7.602s 00:08:07.430 22:50:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.430 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.430 ************************************ 00:08:07.430 END TEST nvmf_filesystem 00:08:07.430 ************************************ 00:08:07.430 22:50:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:07.430 22:50:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.430 22:50:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.430 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:07.430 ************************************ 00:08:07.430 START TEST nvmf_discovery 00:08:07.430 ************************************ 00:08:07.430 
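The nvmftestfini portion of the trace above tears the TCP test bed back down before the next test starts: the NVMe/TCP kernel modules are unloaded, the target-side network namespace is removed, and the leftover address on the initiator interface is flushed. A rough, hedged sketch of that cleanup, using the interface and namespace names from this run (cvl_0_1 and cvl_0_0_ns_spdk):

```bash
#!/usr/bin/env bash
# Hedged sketch of the nvmftestfini cleanup traced above; interface and
# namespace names (cvl_0_1, cvl_0_0_ns_spdk) are the ones this run used.
sync

# unload the kernel NVMe/TCP initiator stack; tolerate "in use" failures
# the same way the script does with set +e
modprobe -v -r nvme-tcp     || true
modprobe -v -r nvme-fabrics || true

# drop the network namespace that held the target-side port
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

# flush the 10.0.0.1/24 address left on the initiator-side interface
ip -4 addr flush cvl_0_1
```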
22:50:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:07.430 * Looking for test storage... 00:08:07.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.430 22:50:35 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.430 22:50:35 -- nvmf/common.sh@7 -- # uname -s 00:08:07.430 22:50:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.430 22:50:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.430 22:50:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.430 22:50:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.430 22:50:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.430 22:50:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.430 22:50:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.430 22:50:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.430 22:50:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.430 22:50:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.430 22:50:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.430 22:50:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:07.430 22:50:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.430 22:50:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.430 22:50:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.430 22:50:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.430 22:50:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.430 22:50:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.430 22:50:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.430 22:50:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.430 22:50:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.430 22:50:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.430 22:50:35 -- paths/export.sh@5 -- # export PATH 00:08:07.430 22:50:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.430 22:50:35 -- nvmf/common.sh@46 -- # : 0 00:08:07.430 22:50:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.430 22:50:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.430 22:50:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.430 22:50:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.430 22:50:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.430 22:50:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.430 22:50:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.430 22:50:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.430 22:50:35 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:07.430 22:50:35 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:07.430 22:50:35 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:07.430 22:50:35 -- target/discovery.sh@15 -- # hash nvme 00:08:07.430 22:50:35 -- target/discovery.sh@20 -- # nvmftestinit 00:08:07.430 22:50:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.430 22:50:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.430 22:50:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.430 22:50:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.430 22:50:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.430 22:50:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.430 22:50:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.430 22:50:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.430 22:50:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:07.430 22:50:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:07.430 22:50:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:07.430 22:50:35 -- common/autotest_common.sh@10 -- # set +x 00:08:14.019 22:50:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:14.019 22:50:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:14.019 22:50:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:14.019 22:50:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:14.019 22:50:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:14.019 22:50:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:14.019 22:50:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:14.019 22:50:41 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:14.019 22:50:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:14.019 22:50:41 -- nvmf/common.sh@295 -- # e810=() 00:08:14.019 22:50:41 -- nvmf/common.sh@295 -- # local -ga e810 00:08:14.019 22:50:41 -- nvmf/common.sh@296 -- # x722=() 00:08:14.019 22:50:41 -- nvmf/common.sh@296 -- # local -ga x722 00:08:14.019 22:50:41 -- nvmf/common.sh@297 -- # mlx=() 00:08:14.019 22:50:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:14.019 22:50:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.019 22:50:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:14.019 22:50:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:14.019 22:50:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:14.019 22:50:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.019 22:50:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.019 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.019 22:50:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:14.019 22:50:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:14.019 22:50:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.019 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.019 22:50:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:14.020 22:50:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.020 22:50:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.020 22:50:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.020 22:50:41 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.020 22:50:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.020 22:50:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.020 22:50:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:14.020 22:50:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.020 22:50:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:14.020 22:50:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.020 22:50:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.020 22:50:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.020 22:50:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:14.020 22:50:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:14.020 22:50:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:14.020 22:50:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:14.020 22:50:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.020 22:50:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.020 22:50:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.020 22:50:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:14.020 22:50:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.020 22:50:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.020 22:50:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:14.020 22:50:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.020 22:50:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.020 22:50:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:14.020 22:50:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:14.020 22:50:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.020 22:50:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.020 22:50:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.020 22:50:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.020 22:50:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:14.020 22:50:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.020 22:50:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.020 22:50:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.020 22:50:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:14.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:08:14.020 00:08:14.020 --- 10.0.0.2 ping statistics --- 00:08:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.020 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:08:14.020 22:50:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:08:14.020 00:08:14.020 --- 10.0.0.1 ping statistics --- 00:08:14.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.020 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:08:14.020 22:50:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.020 22:50:42 -- nvmf/common.sh@410 -- # return 0 00:08:14.020 22:50:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:14.020 22:50:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.020 22:50:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:14.020 22:50:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:14.020 22:50:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.020 22:50:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:14.020 22:50:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:14.020 22:50:42 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:14.020 22:50:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:14.020 22:50:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:14.020 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:08:14.020 22:50:42 -- nvmf/common.sh@469 -- # nvmfpid=3924871 00:08:14.020 22:50:42 -- nvmf/common.sh@470 -- # waitforlisten 3924871 00:08:14.020 22:50:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:14.020 22:50:42 -- common/autotest_common.sh@819 -- # '[' -z 3924871 ']' 00:08:14.020 22:50:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.020 22:50:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:14.020 22:50:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.020 22:50:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:14.020 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:08:14.281 [2024-06-09 22:50:42.214214] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:14.281 [2024-06-09 22:50:42.214280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.281 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.281 [2024-06-09 22:50:42.286941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.281 [2024-06-09 22:50:42.362200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.281 [2024-06-09 22:50:42.362344] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.281 [2024-06-09 22:50:42.362354] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.281 [2024-06-09 22:50:42.362362] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
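nvmf_tcp_init in the trace above builds the TCP test bed from the two ice ports: cvl_0_0 is moved into a private namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then started inside the namespace. A hedged sketch of those steps (names, addresses, and the nvmf_tgt path are the ones in this log):

```bash
#!/usr/bin/env bash
# Sketch of the namespace-based TCP setup traced above.
set -euo pipefail

target_if=cvl_0_0        # becomes the NVMe-oF target side
init_if=cvl_0_1          # stays in the root namespace as the initiator
ns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$init_if"

ip netns add "$ns"
ip link set "$target_if" netns "$ns"

ip addr add 10.0.0.1/24 dev "$init_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$init_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up

# let NVMe/TCP traffic to the target port through
iptables -I INPUT 1 -i "$init_if" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator

# run the SPDK target inside the namespace, as nvmfappstart does
ip netns exec "$ns" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
```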
00:08:14.281 [2024-06-09 22:50:42.362436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.281 [2024-06-09 22:50:42.362563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.281 [2024-06-09 22:50:42.362721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.281 [2024-06-09 22:50:42.362722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.853 22:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:14.853 22:50:42 -- common/autotest_common.sh@852 -- # return 0 00:08:14.853 22:50:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:14.853 22:50:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:14.853 22:50:42 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.116 22:50:43 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 [2024-06-09 22:50:43.038579] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@26 -- # seq 1 4 00:08:15.116 22:50:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.116 22:50:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 Null1 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 [2024-06-09 22:50:43.098958] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.116 22:50:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 Null2 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:15.116 22:50:43 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.116 22:50:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 Null3 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:15.116 22:50:43 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 Null4 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:15.116 
22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:15.116 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.116 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.116 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.116 22:50:43 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:15.378 00:08:15.378 Discovery Log Number of Records 6, Generation counter 6 00:08:15.378 =====Discovery Log Entry 0====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: current discovery subsystem 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4420 00:08:15.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: explicit discovery connections, duplicate discovery information 00:08:15.378 sectype: none 00:08:15.378 =====Discovery Log Entry 1====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: nvme subsystem 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4420 00:08:15.378 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: none 00:08:15.378 sectype: none 00:08:15.378 =====Discovery Log Entry 2====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: nvme subsystem 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4420 00:08:15.378 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: none 00:08:15.378 sectype: none 00:08:15.378 =====Discovery Log Entry 3====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: nvme subsystem 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4420 00:08:15.378 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: none 00:08:15.378 sectype: none 00:08:15.378 =====Discovery Log Entry 4====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: nvme subsystem 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4420 00:08:15.378 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: none 00:08:15.378 sectype: none 00:08:15.378 =====Discovery Log Entry 5====== 00:08:15.378 trtype: tcp 00:08:15.378 adrfam: ipv4 00:08:15.378 subtype: discovery subsystem referral 00:08:15.378 treq: not required 00:08:15.378 portid: 0 00:08:15.378 trsvcid: 4430 00:08:15.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:15.378 traddr: 10.0.0.2 00:08:15.378 eflags: none 00:08:15.378 sectype: none 00:08:15.378 22:50:43 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:15.378 Perform nvmf subsystem discovery via RPC 00:08:15.378 22:50:43 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:15.378 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.378 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.378 [2024-06-09 22:50:43.443958] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:15.378 [ 00:08:15.378 { 00:08:15.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:15.378 "subtype": "Discovery", 00:08:15.378 "listen_addresses": [ 00:08:15.378 { 00:08:15.378 "transport": "TCP", 00:08:15.378 "trtype": "TCP", 00:08:15.378 "adrfam": "IPv4", 00:08:15.378 "traddr": "10.0.0.2", 00:08:15.378 "trsvcid": "4420" 00:08:15.378 } 00:08:15.378 ], 00:08:15.378 "allow_any_host": true, 00:08:15.378 "hosts": [] 00:08:15.378 }, 00:08:15.378 { 00:08:15.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:15.378 "subtype": "NVMe", 00:08:15.378 "listen_addresses": [ 00:08:15.378 { 00:08:15.378 "transport": "TCP", 00:08:15.378 "trtype": "TCP", 00:08:15.378 "adrfam": "IPv4", 00:08:15.378 "traddr": "10.0.0.2", 00:08:15.378 "trsvcid": "4420" 00:08:15.378 } 00:08:15.378 ], 00:08:15.378 "allow_any_host": true, 00:08:15.378 "hosts": [], 00:08:15.378 "serial_number": "SPDK00000000000001", 00:08:15.378 "model_number": "SPDK bdev Controller", 00:08:15.378 "max_namespaces": 32, 00:08:15.378 "min_cntlid": 1, 00:08:15.378 "max_cntlid": 65519, 00:08:15.378 "namespaces": [ 00:08:15.378 { 00:08:15.378 "nsid": 1, 00:08:15.378 "bdev_name": "Null1", 00:08:15.378 "name": "Null1", 00:08:15.378 "nguid": "AD60E87D16964D6FA8968BD07254B742", 00:08:15.378 "uuid": "ad60e87d-1696-4d6f-a896-8bd07254b742" 00:08:15.378 } 00:08:15.378 ] 00:08:15.378 }, 00:08:15.378 { 00:08:15.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:15.378 "subtype": "NVMe", 00:08:15.378 "listen_addresses": [ 00:08:15.378 { 00:08:15.378 "transport": "TCP", 00:08:15.378 "trtype": "TCP", 00:08:15.378 "adrfam": "IPv4", 00:08:15.378 "traddr": "10.0.0.2", 00:08:15.378 "trsvcid": "4420" 00:08:15.378 } 00:08:15.378 ], 00:08:15.378 "allow_any_host": true, 00:08:15.378 "hosts": [], 00:08:15.378 "serial_number": "SPDK00000000000002", 00:08:15.378 "model_number": "SPDK bdev Controller", 00:08:15.378 "max_namespaces": 32, 00:08:15.379 "min_cntlid": 1, 00:08:15.379 "max_cntlid": 65519, 00:08:15.379 "namespaces": [ 00:08:15.379 { 00:08:15.379 "nsid": 1, 00:08:15.379 "bdev_name": "Null2", 00:08:15.379 "name": "Null2", 00:08:15.379 "nguid": "0F9C58D986ED4AF88455E50201DDC490", 00:08:15.379 "uuid": "0f9c58d9-86ed-4af8-8455-e50201ddc490" 00:08:15.379 } 00:08:15.379 ] 00:08:15.379 }, 00:08:15.379 { 00:08:15.379 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:15.379 "subtype": "NVMe", 00:08:15.379 "listen_addresses": [ 00:08:15.379 { 00:08:15.379 "transport": "TCP", 00:08:15.379 "trtype": "TCP", 00:08:15.379 "adrfam": "IPv4", 00:08:15.379 "traddr": "10.0.0.2", 00:08:15.379 "trsvcid": "4420" 00:08:15.379 } 00:08:15.379 ], 00:08:15.379 "allow_any_host": true, 00:08:15.379 "hosts": [], 00:08:15.379 "serial_number": "SPDK00000000000003", 00:08:15.379 "model_number": "SPDK bdev Controller", 00:08:15.379 "max_namespaces": 32, 00:08:15.379 "min_cntlid": 1, 00:08:15.379 "max_cntlid": 65519, 00:08:15.379 "namespaces": [ 00:08:15.379 { 00:08:15.379 "nsid": 1, 00:08:15.379 "bdev_name": "Null3", 00:08:15.379 "name": "Null3", 00:08:15.379 "nguid": "40E43171450B42F5BF37671FB69BFAF4", 00:08:15.379 "uuid": "40e43171-450b-42f5-bf37-671fb69bfaf4" 00:08:15.379 } 00:08:15.379 ] 
00:08:15.379 }, 00:08:15.379 { 00:08:15.379 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:15.379 "subtype": "NVMe", 00:08:15.379 "listen_addresses": [ 00:08:15.379 { 00:08:15.379 "transport": "TCP", 00:08:15.379 "trtype": "TCP", 00:08:15.379 "adrfam": "IPv4", 00:08:15.379 "traddr": "10.0.0.2", 00:08:15.379 "trsvcid": "4420" 00:08:15.379 } 00:08:15.379 ], 00:08:15.379 "allow_any_host": true, 00:08:15.379 "hosts": [], 00:08:15.379 "serial_number": "SPDK00000000000004", 00:08:15.379 "model_number": "SPDK bdev Controller", 00:08:15.379 "max_namespaces": 32, 00:08:15.379 "min_cntlid": 1, 00:08:15.379 "max_cntlid": 65519, 00:08:15.379 "namespaces": [ 00:08:15.379 { 00:08:15.379 "nsid": 1, 00:08:15.379 "bdev_name": "Null4", 00:08:15.379 "name": "Null4", 00:08:15.379 "nguid": "236A6611A01C4BC099AF1472E01A3E9D", 00:08:15.379 "uuid": "236a6611-a01c-4bc0-99af-1472e01a3e9d" 00:08:15.379 } 00:08:15.379 ] 00:08:15.379 } 00:08:15.379 ] 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@42 -- # seq 1 4 00:08:15.379 22:50:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.379 22:50:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.379 22:50:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.379 22:50:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.379 22:50:43 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:15.379 22:50:43 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.379 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
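target/discovery.sh provisions four identical subsystems before dumping them back out with nvmf_get_subsystems, as the JSON above shows: for each index it creates a 100 MiB null bdev, a subsystem with a fixed serial, attaches the bdev as namespace 1, and adds a TCP listener on 10.0.0.2:4420, plus a discovery listener and one referral on port 4430. A hedged sketch of the same provisioning issued directly with scripts/rpc.py (the test itself goes through its rpc_cmd wrapper):

```bash
#!/usr/bin/env bash
# Sketch of the subsystem provisioning traced above, issued via rpc.py
# against an already-running nvmf_tgt. Paths and addresses match this run.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 4); do
    $rpc bdev_null_create "Null$i" 102400 512
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# discovery service listener plus one referral entry on port 4430
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

$rpc nvmf_get_subsystems
```

The teardown traced right after this mirrors the loop: nvmf_delete_subsystem and bdev_null_delete per index, then nvmf_discovery_remove_referral for the 4430 entry.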
00:08:15.379 22:50:43 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:15.379 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.379 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.641 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.641 22:50:43 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:15.641 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.641 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.641 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.641 22:50:43 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:15.641 22:50:43 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:15.641 22:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.641 22:50:43 -- common/autotest_common.sh@10 -- # set +x 00:08:15.641 22:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.641 22:50:43 -- target/discovery.sh@49 -- # check_bdevs= 00:08:15.641 22:50:43 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:15.641 22:50:43 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:15.641 22:50:43 -- target/discovery.sh@57 -- # nvmftestfini 00:08:15.641 22:50:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:15.641 22:50:43 -- nvmf/common.sh@116 -- # sync 00:08:15.641 22:50:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:15.641 22:50:43 -- nvmf/common.sh@119 -- # set +e 00:08:15.641 22:50:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:15.641 22:50:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:15.641 rmmod nvme_tcp 00:08:15.641 rmmod nvme_fabrics 00:08:15.641 rmmod nvme_keyring 00:08:15.641 22:50:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:15.641 22:50:43 -- nvmf/common.sh@123 -- # set -e 00:08:15.641 22:50:43 -- nvmf/common.sh@124 -- # return 0 00:08:15.641 22:50:43 -- nvmf/common.sh@477 -- # '[' -n 3924871 ']' 00:08:15.641 22:50:43 -- nvmf/common.sh@478 -- # killprocess 3924871 00:08:15.641 22:50:43 -- common/autotest_common.sh@926 -- # '[' -z 3924871 ']' 00:08:15.641 22:50:43 -- common/autotest_common.sh@930 -- # kill -0 3924871 00:08:15.641 22:50:43 -- common/autotest_common.sh@931 -- # uname 00:08:15.641 22:50:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:15.641 22:50:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3924871 00:08:15.641 22:50:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:15.641 22:50:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:15.641 22:50:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3924871' 00:08:15.641 killing process with pid 3924871 00:08:15.641 22:50:43 -- common/autotest_common.sh@945 -- # kill 3924871 00:08:15.641 [2024-06-09 22:50:43.743701] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:15.641 22:50:43 -- common/autotest_common.sh@950 -- # wait 3924871 00:08:15.903 22:50:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:15.903 22:50:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:15.903 22:50:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:15.903 22:50:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.903 22:50:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:15.903 22:50:43 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.903 22:50:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.903 22:50:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.833 22:50:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:17.833 00:08:17.833 real 0m10.771s 00:08:17.833 user 0m8.259s 00:08:17.833 sys 0m5.338s 00:08:17.833 22:50:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.833 22:50:45 -- common/autotest_common.sh@10 -- # set +x 00:08:17.833 ************************************ 00:08:17.833 END TEST nvmf_discovery 00:08:17.833 ************************************ 00:08:17.833 22:50:45 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:17.833 22:50:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.833 22:50:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.833 22:50:45 -- common/autotest_common.sh@10 -- # set +x 00:08:17.833 ************************************ 00:08:17.833 START TEST nvmf_referrals 00:08:17.833 ************************************ 00:08:17.833 22:50:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:18.126 * Looking for test storage... 00:08:18.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.126 22:50:46 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.126 22:50:46 -- nvmf/common.sh@7 -- # uname -s 00:08:18.126 22:50:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.126 22:50:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.126 22:50:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.126 22:50:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.126 22:50:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.126 22:50:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.126 22:50:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.126 22:50:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.126 22:50:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.126 22:50:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.126 22:50:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.126 22:50:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:18.126 22:50:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.126 22:50:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.126 22:50:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.126 22:50:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.126 22:50:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.126 22:50:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.126 22:50:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.126 22:50:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.126 22:50:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.127 22:50:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.127 22:50:46 -- paths/export.sh@5 -- # export PATH 00:08:18.127 22:50:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.127 22:50:46 -- nvmf/common.sh@46 -- # : 0 00:08:18.127 22:50:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:18.127 22:50:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:18.127 22:50:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:18.127 22:50:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.127 22:50:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.127 22:50:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:18.127 22:50:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:18.127 22:50:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:18.127 22:50:46 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:18.127 22:50:46 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:18.127 22:50:46 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:18.127 22:50:46 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:18.127 22:50:46 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:18.127 22:50:46 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:18.127 22:50:46 -- target/referrals.sh@37 -- # nvmftestinit 00:08:18.127 22:50:46 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:18.127 22:50:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.127 22:50:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:18.127 22:50:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:18.127 22:50:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:18.127 22:50:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.127 22:50:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.127 22:50:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.127 22:50:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:18.127 22:50:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:18.127 22:50:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:18.127 22:50:46 -- common/autotest_common.sh@10 -- # set +x 00:08:24.719 22:50:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:24.719 22:50:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:24.719 22:50:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:24.719 22:50:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:24.719 22:50:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:24.719 22:50:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:24.719 22:50:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:24.719 22:50:51 -- nvmf/common.sh@294 -- # net_devs=() 00:08:24.719 22:50:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:24.719 22:50:51 -- nvmf/common.sh@295 -- # e810=() 00:08:24.719 22:50:51 -- nvmf/common.sh@295 -- # local -ga e810 00:08:24.719 22:50:51 -- nvmf/common.sh@296 -- # x722=() 00:08:24.719 22:50:51 -- nvmf/common.sh@296 -- # local -ga x722 00:08:24.719 22:50:51 -- nvmf/common.sh@297 -- # mlx=() 00:08:24.719 22:50:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:24.719 22:50:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.719 22:50:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:24.719 22:50:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:24.719 22:50:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:24.719 22:50:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:24.719 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:24.719 22:50:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:24.719 22:50:51 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:24.719 22:50:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:24.719 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:24.719 22:50:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:24.719 22:50:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.719 22:50:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.719 22:50:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:24.719 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:24.719 22:50:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.719 22:50:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:24.719 22:50:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.719 22:50:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.719 22:50:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:24.719 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:24.719 22:50:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.719 22:50:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:24.719 22:50:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:24.719 22:50:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:24.719 22:50:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.719 22:50:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.719 22:50:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.719 22:50:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:24.719 22:50:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.719 22:50:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.719 22:50:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:24.719 22:50:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.719 22:50:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.719 22:50:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:24.719 22:50:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:24.719 22:50:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.719 22:50:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
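gather_supported_nvmf_pci_devs, traced again here for the referrals test, works from PCI vendor:device IDs (the E810 ports in this run are 0x8086:0x159b) and resolves each matching PCI address to its kernel net device through sysfs before picking cvl_0_0 as the target interface and cvl_0_1 as the initiator. A simplified, hedged sketch of that lookup, using lspci as a stand-in for the script's cached PCI enumeration and skipping the RDMA branches:

```bash
#!/usr/bin/env bash
# Sketch of mapping supported NICs (PCI vendor:device) to net device names,
# roughly what common.sh does before choosing target/initiator interfaces.
set -euo pipefail

net_devs=()
# E810 ports in this run: vendor 0x8086, device 0x159b
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    # each PCI function exposes its netdev(s) under .../net/
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] && net_devs+=("$(basename "$dev")")
    done
done

printf 'Found net devices: %s\n' "${net_devs[*]}"
# the test then uses the first as the target port, the second as the initiator
```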
00:08:24.719 22:50:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.719 22:50:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.719 22:50:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:24.719 22:50:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.719 22:50:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.719 22:50:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.719 22:50:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:24.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.707 ms 00:08:24.719 00:08:24.719 --- 10.0.0.2 ping statistics --- 00:08:24.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.719 rtt min/avg/max/mdev = 0.707/0.707/0.707/0.000 ms 00:08:24.719 22:50:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:08:24.719 00:08:24.719 --- 10.0.0.1 ping statistics --- 00:08:24.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.719 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:08:24.719 22:50:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.719 22:50:52 -- nvmf/common.sh@410 -- # return 0 00:08:24.719 22:50:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:24.719 22:50:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.719 22:50:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:24.719 22:50:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:24.719 22:50:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.719 22:50:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:24.719 22:50:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:24.719 22:50:52 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:24.719 22:50:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:24.719 22:50:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:24.719 22:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.719 22:50:52 -- nvmf/common.sh@469 -- # nvmfpid=3929147 00:08:24.719 22:50:52 -- nvmf/common.sh@470 -- # waitforlisten 3929147 00:08:24.719 22:50:52 -- common/autotest_common.sh@819 -- # '[' -z 3929147 ']' 00:08:24.719 22:50:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.719 22:50:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:24.719 22:50:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.719 22:50:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:24.719 22:50:52 -- common/autotest_common.sh@10 -- # set +x 00:08:24.719 22:50:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.719 [2024-06-09 22:50:52.310530] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
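referrals.sh, whose startup is traced here, exercises the discovery referral RPCs against the freshly started target: it adds a discovery listener on 10.0.0.2:8009, registers referrals to 127.0.0.2, 127.0.0.3 and 127.0.0.4 on port 4430, and then checks the same three addresses both through nvmf_discovery_get_referrals and through nvme discover from the host side, as the trace below shows. A hedged sketch of that check (the jq filters mirror the ones in the trace; hostnqn and hostid are this run's generated values):

```bash
#!/usr/bin/env bash
# Sketch of the referral add/verify flow exercised by referrals.sh.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# view from the target: three referral entries should be registered
rpc_ips=$($rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

# view from the host: the discovery log should advertise the same referrals
nvme_ips=$(nvme discover --hostnqn="$hostnqn" --hostid="$hostid" \
               -t tcp -a 10.0.0.2 -s 8009 -o json |
           jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
           sort)

[[ "$rpc_ips" == "$nvme_ips" ]] && echo "referrals match: $rpc_ips"
```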
00:08:24.719 [2024-06-09 22:50:52.310600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.720 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.720 [2024-06-09 22:50:52.380689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.720 [2024-06-09 22:50:52.453942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.720 [2024-06-09 22:50:52.454075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.720 [2024-06-09 22:50:52.454086] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.720 [2024-06-09 22:50:52.454095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.720 [2024-06-09 22:50:52.454227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.720 [2024-06-09 22:50:52.454343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.720 [2024-06-09 22:50:52.454505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.720 [2024-06-09 22:50:52.454504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.980 22:50:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:24.980 22:50:53 -- common/autotest_common.sh@852 -- # return 0 00:08:24.980 22:50:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:24.980 22:50:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:24.980 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 22:50:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.980 22:50:53 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.980 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.980 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 [2024-06-09 22:50:53.129601] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.980 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.980 22:50:53 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:24.980 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.980 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.980 [2024-06-09 22:50:53.145792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:24.980 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:24.980 22:50:53 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.980 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:24.980 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.241 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.241 22:50:53 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.241 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.241 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.241 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.241 22:50:53 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:25.241 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.241 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.241 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.241 22:50:53 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.241 22:50:53 -- target/referrals.sh@48 -- # jq length 00:08:25.241 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.242 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.242 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.242 22:50:53 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:25.242 22:50:53 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:25.242 22:50:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.242 22:50:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.242 22:50:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.242 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.242 22:50:53 -- target/referrals.sh@21 -- # sort 00:08:25.242 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.242 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.242 22:50:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.242 22:50:53 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.242 22:50:53 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:25.242 22:50:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.242 22:50:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.242 22:50:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.242 22:50:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.242 22:50:53 -- target/referrals.sh@26 -- # sort 00:08:25.503 22:50:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:25.503 22:50:53 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:25.503 22:50:53 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:25.503 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.503 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.503 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.503 22:50:53 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:25.503 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.503 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.503 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.503 22:50:53 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:25.503 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.503 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.504 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.504 22:50:53 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.504 22:50:53 -- target/referrals.sh@56 -- # jq length 00:08:25.504 22:50:53 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.504 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.504 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.504 22:50:53 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:25.504 22:50:53 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:25.504 22:50:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.504 22:50:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.504 22:50:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.504 22:50:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.504 22:50:53 -- target/referrals.sh@26 -- # sort 00:08:25.504 22:50:53 -- target/referrals.sh@26 -- # echo 00:08:25.504 22:50:53 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:25.504 22:50:53 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:25.504 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.504 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.504 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.504 22:50:53 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.504 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.504 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.504 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.504 22:50:53 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:25.504 22:50:53 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.504 22:50:53 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.504 22:50:53 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.504 22:50:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:25.504 22:50:53 -- target/referrals.sh@21 -- # sort 00:08:25.504 22:50:53 -- common/autotest_common.sh@10 -- # set +x 00:08:25.504 22:50:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:25.765 22:50:53 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:25.765 22:50:53 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.765 22:50:53 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:25.765 22:50:53 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.765 22:50:53 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.765 22:50:53 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.765 22:50:53 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.765 22:50:53 -- target/referrals.sh@26 -- # sort 00:08:25.765 22:50:53 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:25.765 22:50:53 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.765 22:50:53 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:25.765 22:50:53 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:25.765 22:50:53 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.765 22:50:53 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.765 22:50:53 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.027 22:50:54 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:26.027 22:50:54 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.027 22:50:54 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:26.027 22:50:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.027 22:50:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.027 22:50:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.288 22:50:54 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.288 22:50:54 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:26.288 22:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.288 22:50:54 -- common/autotest_common.sh@10 -- # set +x 00:08:26.288 22:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.288 22:50:54 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:26.288 22:50:54 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:26.288 22:50:54 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.288 22:50:54 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:26.288 22:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.288 22:50:54 -- target/referrals.sh@21 -- # sort 00:08:26.288 22:50:54 -- common/autotest_common.sh@10 -- # set +x 00:08:26.288 22:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.288 22:50:54 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:26.288 22:50:54 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.288 22:50:54 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:26.288 22:50:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.288 22:50:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.288 22:50:54 -- target/referrals.sh@26 -- # sort 00:08:26.288 22:50:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.288 22:50:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.288 22:50:54 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:26.288 22:50:54 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:26.288 22:50:54 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:26.288 22:50:54 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:26.288 22:50:54 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:26.288 22:50:54 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.288 22:50:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:26.548 22:50:54 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:26.548 22:50:54 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:26.548 22:50:54 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:26.548 22:50:54 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:26.548 22:50:54 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.548 22:50:54 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:26.548 22:50:54 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:26.548 22:50:54 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:26.548 22:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.548 22:50:54 -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 22:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.548 22:50:54 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:26.548 22:50:54 -- target/referrals.sh@82 -- # jq length 00:08:26.548 22:50:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:26.548 22:50:54 -- common/autotest_common.sh@10 -- # set +x 00:08:26.548 22:50:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:26.548 22:50:54 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:26.548 22:50:54 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:26.548 22:50:54 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:26.548 22:50:54 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:26.548 22:50:54 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:26.548 22:50:54 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:26.548 22:50:54 -- target/referrals.sh@26 -- # sort 00:08:26.807 22:50:54 -- target/referrals.sh@26 -- # echo 00:08:26.807 22:50:54 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:26.808 22:50:54 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:26.808 22:50:54 -- target/referrals.sh@86 -- # nvmftestfini 00:08:26.808 22:50:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:26.808 22:50:54 -- nvmf/common.sh@116 -- # sync 00:08:26.808 22:50:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:26.808 22:50:54 -- nvmf/common.sh@119 -- # set +e 00:08:26.808 22:50:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:26.808 22:50:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:26.808 rmmod nvme_tcp 00:08:26.808 rmmod nvme_fabrics 00:08:26.808 rmmod nvme_keyring 00:08:26.808 22:50:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:26.808 22:50:54 -- nvmf/common.sh@123 -- # set -e 00:08:26.808 22:50:54 -- nvmf/common.sh@124 -- # return 0 00:08:26.808 22:50:54 -- nvmf/common.sh@477 
-- # '[' -n 3929147 ']' 00:08:26.808 22:50:54 -- nvmf/common.sh@478 -- # killprocess 3929147 00:08:26.808 22:50:54 -- common/autotest_common.sh@926 -- # '[' -z 3929147 ']' 00:08:26.808 22:50:54 -- common/autotest_common.sh@930 -- # kill -0 3929147 00:08:26.808 22:50:54 -- common/autotest_common.sh@931 -- # uname 00:08:26.808 22:50:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:26.808 22:50:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3929147 00:08:26.808 22:50:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:26.808 22:50:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:26.808 22:50:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3929147' 00:08:26.808 killing process with pid 3929147 00:08:26.808 22:50:54 -- common/autotest_common.sh@945 -- # kill 3929147 00:08:26.808 22:50:54 -- common/autotest_common.sh@950 -- # wait 3929147 00:08:27.068 22:50:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:27.068 22:50:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:27.068 22:50:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:27.068 22:50:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.068 22:50:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:27.068 22:50:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.068 22:50:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.068 22:50:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.983 22:50:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:28.983 00:08:28.983 real 0m11.144s 00:08:28.983 user 0m13.007s 00:08:28.983 sys 0m5.255s 00:08:28.983 22:50:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.983 22:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:28.983 ************************************ 00:08:28.983 END TEST nvmf_referrals 00:08:28.983 ************************************ 00:08:29.245 22:50:57 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.245 22:50:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.245 22:50:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.245 22:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:29.245 ************************************ 00:08:29.245 START TEST nvmf_connect_disconnect 00:08:29.245 ************************************ 00:08:29.245 22:50:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:29.245 * Looking for test storage... 
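The nvmf_referrals run that finishes above is an add/verify/remove cycle against the target's discovery service: referrals are registered over RPC, then cross-checked from the initiator side with nvme discover, then removed and re-checked. A condensed sketch follows, using scripts/rpc.py directly in place of the trace's rpc_cmd wrapper; the host NQN/ID, addresses and ports are the ones from this run.

    # Sketch of the referral flow exercised by nvmf_referrals (rpc.py used in place of rpc_cmd)
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
          --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be)

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length      # 3 referrals registered
    # What an initiator sees through the discovery service on 10.0.0.2:8009:
    nvme discover "${host[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq length      # back to 0
    # The trace then repeats the cycle with explicit subsystem NQNs (-n discovery,
    # -n nqn.2016-06.io.spdk:cnode1) and verifies the advertised subnqn of each record.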
00:08:29.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.245 22:50:57 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.245 22:50:57 -- nvmf/common.sh@7 -- # uname -s 00:08:29.245 22:50:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.245 22:50:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.245 22:50:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.245 22:50:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.245 22:50:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.245 22:50:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.245 22:50:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.245 22:50:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.245 22:50:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.245 22:50:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.245 22:50:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.245 22:50:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:29.245 22:50:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.245 22:50:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.245 22:50:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.245 22:50:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.245 22:50:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.245 22:50:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.245 22:50:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.245 22:50:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.245 22:50:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.245 22:50:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.245 22:50:57 -- paths/export.sh@5 -- # export PATH 00:08:29.245 22:50:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.245 22:50:57 -- nvmf/common.sh@46 -- # : 0 00:08:29.245 22:50:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.245 22:50:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.245 22:50:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.245 22:50:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.245 22:50:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.245 22:50:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.245 22:50:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.245 22:50:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.245 22:50:57 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.245 22:50:57 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.245 22:50:57 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:29.245 22:50:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:29.245 22:50:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.245 22:50:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.245 22:50:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.245 22:50:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.245 22:50:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.245 22:50:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.245 22:50:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.245 22:50:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:29.245 22:50:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:29.245 22:50:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:29.245 22:50:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.833 22:51:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:35.833 22:51:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:35.833 22:51:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:35.833 22:51:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:35.833 22:51:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:35.833 22:51:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:35.833 22:51:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:35.833 22:51:04 -- nvmf/common.sh@294 -- # net_devs=() 00:08:36.095 22:51:04 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:36.095 22:51:04 -- nvmf/common.sh@295 -- # e810=() 00:08:36.095 22:51:04 -- nvmf/common.sh@295 -- # local -ga e810 00:08:36.095 22:51:04 -- nvmf/common.sh@296 -- # x722=() 00:08:36.095 22:51:04 -- nvmf/common.sh@296 -- # local -ga x722 00:08:36.095 22:51:04 -- nvmf/common.sh@297 -- # mlx=() 00:08:36.095 22:51:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:36.095 22:51:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.095 22:51:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:36.095 22:51:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:36.095 22:51:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:36.095 22:51:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:36.095 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:36.095 22:51:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:36.095 22:51:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:36.095 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:36.095 22:51:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:36.095 22:51:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.095 22:51:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.095 22:51:04 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:08:36.095 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:36.095 22:51:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.095 22:51:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:36.095 22:51:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.095 22:51:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.095 22:51:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:36.095 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:36.095 22:51:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.095 22:51:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:36.095 22:51:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:36.095 22:51:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:36.095 22:51:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.095 22:51:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.095 22:51:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.095 22:51:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:36.095 22:51:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.095 22:51:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.095 22:51:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:36.095 22:51:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.095 22:51:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.095 22:51:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:36.095 22:51:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:36.095 22:51:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.095 22:51:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.095 22:51:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.095 22:51:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.095 22:51:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:36.095 22:51:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.357 22:51:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.357 22:51:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.357 22:51:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:36.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.802 ms 00:08:36.357 00:08:36.357 --- 10.0.0.2 ping statistics --- 00:08:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.357 rtt min/avg/max/mdev = 0.802/0.802/0.802/0.000 ms 00:08:36.357 22:51:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:36.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:08:36.357 00:08:36.357 --- 10.0.0.1 ping statistics --- 00:08:36.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.357 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:08:36.357 22:51:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.357 22:51:04 -- nvmf/common.sh@410 -- # return 0 00:08:36.357 22:51:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:36.357 22:51:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.357 22:51:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:36.357 22:51:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:36.357 22:51:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.357 22:51:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:36.357 22:51:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:36.357 22:51:04 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:36.357 22:51:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:36.357 22:51:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:36.357 22:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 22:51:04 -- nvmf/common.sh@469 -- # nvmfpid=3934086 00:08:36.357 22:51:04 -- nvmf/common.sh@470 -- # waitforlisten 3934086 00:08:36.357 22:51:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.357 22:51:04 -- common/autotest_common.sh@819 -- # '[' -z 3934086 ']' 00:08:36.357 22:51:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.357 22:51:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:36.357 22:51:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.357 22:51:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:36.357 22:51:04 -- common/autotest_common.sh@10 -- # set +x 00:08:36.357 [2024-06-09 22:51:04.452176] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:36.357 [2024-06-09 22:51:04.452242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.357 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.357 [2024-06-09 22:51:04.524812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.618 [2024-06-09 22:51:04.597884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.618 [2024-06-09 22:51:04.598029] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.618 [2024-06-09 22:51:04.598039] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.618 [2024-06-09 22:51:04.598048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:36.618 [2024-06-09 22:51:04.598164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.618 [2024-06-09 22:51:04.598280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.618 [2024-06-09 22:51:04.598452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.618 [2024-06-09 22:51:04.598452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.191 22:51:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.191 22:51:05 -- common/autotest_common.sh@852 -- # return 0 00:08:37.191 22:51:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.191 22:51:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 22:51:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:37.191 22:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 [2024-06-09 22:51:05.280553] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.191 22:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.191 22:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 22:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.191 22:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 22:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.191 22:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 22:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.191 22:51:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.191 22:51:05 -- common/autotest_common.sh@10 -- # set +x 00:08:37.191 [2024-06-09 22:51:05.339994] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.191 22:51:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:37.191 22:51:05 -- target/connect_disconnect.sh@34 -- # set +x 00:08:39.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:48.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.111 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:43.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.047 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.592 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.251 22:54:58 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
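Each of the 100 "disconnected 1 controller(s)" lines above is one iteration of the connect/disconnect loop against the subsystem set up earlier (a 64 MiB / 512-byte-block Malloc0 bdev exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, with NVME_CONNECT='nvme connect -i 8' and num_iterations=100). The sketch below is an approximation of what one iteration amounts to, not the literal body of connect_disconnect.sh.

    # Approximate shape of the 100-iteration loop whose output appears above
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 \
             -n nqn.2016-06.io.spdk:cnode1 \
             --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
             --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
        # once the controller/namespace shows up, tear the association down again;
        # nvme disconnect prints the "NQN:... disconnected 1 controller(s)" lines seen above
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done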
00:12:30.251 22:54:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:30.251 22:54:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.251 22:54:58 -- nvmf/common.sh@116 -- # sync 00:12:30.251 22:54:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.251 22:54:58 -- nvmf/common.sh@119 -- # set +e 00:12:30.251 22:54:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.251 22:54:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.251 rmmod nvme_tcp 00:12:30.251 rmmod nvme_fabrics 00:12:30.251 rmmod nvme_keyring 00:12:30.251 22:54:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.251 22:54:58 -- nvmf/common.sh@123 -- # set -e 00:12:30.251 22:54:58 -- nvmf/common.sh@124 -- # return 0 00:12:30.251 22:54:58 -- nvmf/common.sh@477 -- # '[' -n 3934086 ']' 00:12:30.251 22:54:58 -- nvmf/common.sh@478 -- # killprocess 3934086 00:12:30.251 22:54:58 -- common/autotest_common.sh@926 -- # '[' -z 3934086 ']' 00:12:30.251 22:54:58 -- common/autotest_common.sh@930 -- # kill -0 3934086 00:12:30.251 22:54:58 -- common/autotest_common.sh@931 -- # uname 00:12:30.251 22:54:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:30.251 22:54:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3934086 00:12:30.251 22:54:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:30.251 22:54:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:30.251 22:54:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3934086' 00:12:30.251 killing process with pid 3934086 00:12:30.251 22:54:58 -- common/autotest_common.sh@945 -- # kill 3934086 00:12:30.251 22:54:58 -- common/autotest_common.sh@950 -- # wait 3934086 00:12:30.513 22:54:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.513 22:54:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.513 22:54:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.513 22:54:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.513 22:54:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.513 22:54:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.513 22:54:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.513 22:54:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.428 22:55:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:32.428 00:12:32.428 real 4m3.409s 00:12:32.428 user 15m29.212s 00:12:32.428 sys 0m22.106s 00:12:32.428 22:55:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.428 22:55:00 -- common/autotest_common.sh@10 -- # set +x 00:12:32.428 ************************************ 00:12:32.428 END TEST nvmf_connect_disconnect 00:12:32.428 ************************************ 00:12:32.688 22:55:00 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.689 22:55:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:32.689 22:55:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.689 22:55:00 -- common/autotest_common.sh@10 -- # set +x 00:12:32.689 ************************************ 00:12:32.689 START TEST nvmf_multitarget 00:12:32.689 ************************************ 00:12:32.689 22:55:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:32.689 * Looking for test storage... 
00:12:32.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.689 22:55:00 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.689 22:55:00 -- nvmf/common.sh@7 -- # uname -s 00:12:32.689 22:55:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.689 22:55:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.689 22:55:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.689 22:55:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.689 22:55:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.689 22:55:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.689 22:55:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.689 22:55:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.689 22:55:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.689 22:55:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.689 22:55:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.689 22:55:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.689 22:55:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.689 22:55:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.689 22:55:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.689 22:55:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.689 22:55:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.689 22:55:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.689 22:55:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.689 22:55:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.689 22:55:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.689 22:55:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.689 22:55:00 -- paths/export.sh@5 -- # export PATH 00:12:32.689 22:55:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.689 22:55:00 -- nvmf/common.sh@46 -- # : 0 00:12:32.689 22:55:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.689 22:55:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.689 22:55:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.689 22:55:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.689 22:55:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.689 22:55:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:32.689 22:55:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.689 22:55:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.689 22:55:00 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:32.689 22:55:00 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:32.689 22:55:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.689 22:55:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.689 22:55:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.689 22:55:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.689 22:55:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.689 22:55:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.689 22:55:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.689 22:55:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.689 22:55:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:32.689 22:55:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:32.689 22:55:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:32.689 22:55:00 -- common/autotest_common.sh@10 -- # set +x 00:12:39.318 22:55:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:39.318 22:55:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:39.318 22:55:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:39.318 22:55:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:39.318 22:55:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:39.318 22:55:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:39.318 22:55:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:39.318 22:55:07 -- nvmf/common.sh@294 -- # net_devs=() 00:12:39.318 22:55:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:39.318 22:55:07 -- 
nvmf/common.sh@295 -- # e810=() 00:12:39.318 22:55:07 -- nvmf/common.sh@295 -- # local -ga e810 00:12:39.318 22:55:07 -- nvmf/common.sh@296 -- # x722=() 00:12:39.318 22:55:07 -- nvmf/common.sh@296 -- # local -ga x722 00:12:39.318 22:55:07 -- nvmf/common.sh@297 -- # mlx=() 00:12:39.318 22:55:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:39.318 22:55:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.318 22:55:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:39.318 22:55:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:39.318 22:55:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:39.318 22:55:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:39.318 22:55:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.318 22:55:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.318 22:55:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:39.319 22:55:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.319 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.319 22:55:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:39.319 22:55:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:39.319 22:55:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.319 22:55:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:39.319 22:55:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.319 22:55:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:12:39.319 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.319 22:55:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.319 22:55:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:39.319 22:55:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.319 22:55:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:39.319 22:55:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.319 22:55:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:39.319 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.319 22:55:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.319 22:55:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:39.319 22:55:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:39.319 22:55:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:39.319 22:55:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:39.319 22:55:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.319 22:55:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.319 22:55:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.319 22:55:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:39.319 22:55:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.319 22:55:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.319 22:55:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:39.319 22:55:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.319 22:55:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.319 22:55:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:39.319 22:55:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:39.319 22:55:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.319 22:55:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.581 22:55:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.581 22:55:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.581 22:55:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:39.581 22:55:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.581 22:55:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.581 22:55:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.581 22:55:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:39.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:12:39.581 00:12:39.581 --- 10.0.0.2 ping statistics --- 00:12:39.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.581 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:12:39.581 22:55:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:12:39.581 00:12:39.581 --- 10.0.0.1 ping statistics --- 00:12:39.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.581 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:12:39.581 22:55:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.581 22:55:07 -- nvmf/common.sh@410 -- # return 0 00:12:39.581 22:55:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:39.581 22:55:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.581 22:55:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:39.581 22:55:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:39.581 22:55:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.581 22:55:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:39.581 22:55:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:39.842 22:55:07 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:39.842 22:55:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:39.842 22:55:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.842 22:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.842 22:55:07 -- nvmf/common.sh@469 -- # nvmfpid=3986942 00:12:39.842 22:55:07 -- nvmf/common.sh@470 -- # waitforlisten 3986942 00:12:39.842 22:55:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.842 22:55:07 -- common/autotest_common.sh@819 -- # '[' -z 3986942 ']' 00:12:39.842 22:55:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.842 22:55:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:39.842 22:55:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.842 22:55:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:39.842 22:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:39.842 [2024-06-09 22:55:07.820859] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:39.842 [2024-06-09 22:55:07.820926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.842 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.842 [2024-06-09 22:55:07.890722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.842 [2024-06-09 22:55:07.964682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:39.842 [2024-06-09 22:55:07.964815] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.842 [2024-06-09 22:55:07.964825] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.842 [2024-06-09 22:55:07.964833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
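For reference outside the CI pool, the nvmf_tcp_init sequence traced above amounts to the following manual setup. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this run (two ports of one E810 NIC bound to the ice driver); substitute your own netdev names, and run as root as the job does.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator reachability
modprobe nvme-tcp                                                   # kernel host-side transport for nvme connect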
00:12:39.842 [2024-06-09 22:55:07.964976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.843 [2024-06-09 22:55:07.965095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.843 [2024-06-09 22:55:07.965254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.843 [2024-06-09 22:55:07.965255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.786 22:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:40.786 22:55:08 -- common/autotest_common.sh@852 -- # return 0 00:12:40.786 22:55:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:40.786 22:55:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:40.786 22:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:40.786 22:55:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.786 22:55:08 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:40.786 22:55:08 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.786 22:55:08 -- target/multitarget.sh@21 -- # jq length 00:12:40.786 22:55:08 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:40.786 22:55:08 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:40.786 "nvmf_tgt_1" 00:12:40.786 22:55:08 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:40.786 "nvmf_tgt_2" 00:12:40.786 22:55:08 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:40.786 22:55:08 -- target/multitarget.sh@28 -- # jq length 00:12:41.047 22:55:09 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:41.047 22:55:09 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:41.047 true 00:12:41.047 22:55:09 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:41.047 true 00:12:41.308 22:55:09 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:41.308 22:55:09 -- target/multitarget.sh@35 -- # jq length 00:12:41.308 22:55:09 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:41.308 22:55:09 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:41.308 22:55:09 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:41.308 22:55:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:41.308 22:55:09 -- nvmf/common.sh@116 -- # sync 00:12:41.308 22:55:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.308 22:55:09 -- nvmf/common.sh@119 -- # set +e 00:12:41.308 22:55:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.308 22:55:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.308 rmmod nvme_tcp 00:12:41.308 rmmod nvme_fabrics 00:12:41.308 rmmod nvme_keyring 00:12:41.308 22:55:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.308 22:55:09 -- nvmf/common.sh@123 -- # set -e 00:12:41.308 22:55:09 -- nvmf/common.sh@124 -- # return 0 
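Condensed, the multitarget assertions above are a create/count/delete cycle through the test's multitarget_rpc.py helper against the running nvmf_tgt; the jq length checks correspond to the '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' guards in the trace:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$rpc_py nvmf_get_targets | jq length            # 1: only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc_py nvmf_get_targets | jq length            # 3: default + nvmf_tgt_1 + nvmf_tgt_2
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
$rpc_py nvmf_get_targets | jq length            # back to 1 before nvmftestfini runs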
00:12:41.308 22:55:09 -- nvmf/common.sh@477 -- # '[' -n 3986942 ']' 00:12:41.308 22:55:09 -- nvmf/common.sh@478 -- # killprocess 3986942 00:12:41.308 22:55:09 -- common/autotest_common.sh@926 -- # '[' -z 3986942 ']' 00:12:41.308 22:55:09 -- common/autotest_common.sh@930 -- # kill -0 3986942 00:12:41.308 22:55:09 -- common/autotest_common.sh@931 -- # uname 00:12:41.308 22:55:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:41.308 22:55:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3986942 00:12:41.308 22:55:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:41.308 22:55:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:41.308 22:55:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3986942' 00:12:41.308 killing process with pid 3986942 00:12:41.308 22:55:09 -- common/autotest_common.sh@945 -- # kill 3986942 00:12:41.308 22:55:09 -- common/autotest_common.sh@950 -- # wait 3986942 00:12:41.570 22:55:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.570 22:55:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.570 22:55:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.570 22:55:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.570 22:55:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.570 22:55:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.570 22:55:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.570 22:55:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.488 22:55:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:43.488 00:12:43.488 real 0m11.013s 00:12:43.488 user 0m9.132s 00:12:43.488 sys 0m5.609s 00:12:43.488 22:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.488 22:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:43.488 ************************************ 00:12:43.488 END TEST nvmf_multitarget 00:12:43.488 ************************************ 00:12:43.750 22:55:11 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:43.750 22:55:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:43.750 22:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.750 22:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:43.750 ************************************ 00:12:43.750 START TEST nvmf_rpc 00:12:43.750 ************************************ 00:12:43.750 22:55:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:43.750 * Looking for test storage... 
00:12:43.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:43.750 22:55:11 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:43.750 22:55:11 -- nvmf/common.sh@7 -- # uname -s 00:12:43.750 22:55:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.750 22:55:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.750 22:55:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.750 22:55:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.750 22:55:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.750 22:55:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.750 22:55:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.750 22:55:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.750 22:55:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.750 22:55:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.750 22:55:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:43.750 22:55:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:43.750 22:55:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.750 22:55:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.750 22:55:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:43.751 22:55:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:43.751 22:55:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.751 22:55:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.751 22:55:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.751 22:55:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.751 22:55:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.751 22:55:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.751 22:55:11 -- paths/export.sh@5 -- # export PATH 00:12:43.751 22:55:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.751 22:55:11 -- nvmf/common.sh@46 -- # : 0 00:12:43.751 22:55:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:43.751 22:55:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:43.751 22:55:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:43.751 22:55:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.751 22:55:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.751 22:55:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:43.751 22:55:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:43.751 22:55:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:43.751 22:55:11 -- target/rpc.sh@11 -- # loops=5 00:12:43.751 22:55:11 -- target/rpc.sh@23 -- # nvmftestinit 00:12:43.751 22:55:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:43.751 22:55:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.751 22:55:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:43.751 22:55:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:43.751 22:55:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:43.751 22:55:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.751 22:55:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.751 22:55:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.751 22:55:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:43.751 22:55:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:43.751 22:55:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:43.751 22:55:11 -- common/autotest_common.sh@10 -- # set +x 00:12:51.902 22:55:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:51.902 22:55:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:51.902 22:55:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:51.902 22:55:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:51.902 22:55:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:51.902 22:55:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:51.902 22:55:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:51.902 22:55:18 -- nvmf/common.sh@294 -- # net_devs=() 00:12:51.902 22:55:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:51.902 22:55:18 -- nvmf/common.sh@295 -- # e810=() 00:12:51.902 22:55:18 -- nvmf/common.sh@295 -- # local -ga e810 00:12:51.902 
22:55:18 -- nvmf/common.sh@296 -- # x722=() 00:12:51.902 22:55:18 -- nvmf/common.sh@296 -- # local -ga x722 00:12:51.902 22:55:18 -- nvmf/common.sh@297 -- # mlx=() 00:12:51.902 22:55:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:51.902 22:55:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.902 22:55:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:51.902 22:55:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:51.902 22:55:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.902 22:55:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:51.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:51.902 22:55:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:51.902 22:55:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:51.902 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:51.902 22:55:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.902 22:55:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.902 22:55:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.902 22:55:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:51.902 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:51.902 22:55:18 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:51.902 22:55:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:51.902 22:55:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.902 22:55:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.902 22:55:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:51.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:51.902 22:55:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.902 22:55:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:51.902 22:55:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:51.902 22:55:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:51.902 22:55:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.902 22:55:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.902 22:55:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.902 22:55:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:51.902 22:55:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.902 22:55:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.902 22:55:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:51.902 22:55:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.902 22:55:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.902 22:55:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:51.902 22:55:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:51.902 22:55:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.902 22:55:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.902 22:55:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.902 22:55:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.902 22:55:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:51.902 22:55:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.902 22:55:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.902 22:55:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.903 22:55:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:51.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:12:51.903 00:12:51.903 --- 10.0.0.2 ping statistics --- 00:12:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.903 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:12:51.903 22:55:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:12:51.903 00:12:51.903 --- 10.0.0.1 ping statistics --- 00:12:51.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.903 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:12:51.903 22:55:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.903 22:55:18 -- nvmf/common.sh@410 -- # return 0 00:12:51.903 22:55:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:51.903 22:55:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.903 22:55:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:51.903 22:55:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:51.903 22:55:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.903 22:55:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:51.903 22:55:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:51.903 22:55:18 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:51.903 22:55:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:51.903 22:55:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:51.903 22:55:18 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 22:55:18 -- nvmf/common.sh@469 -- # nvmfpid=3991595 00:12:51.903 22:55:18 -- nvmf/common.sh@470 -- # waitforlisten 3991595 00:12:51.903 22:55:18 -- common/autotest_common.sh@819 -- # '[' -z 3991595 ']' 00:12:51.903 22:55:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.903 22:55:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:51.903 22:55:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.903 22:55:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:51.903 22:55:18 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 22:55:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.903 [2024-06-09 22:55:18.991291] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:51.903 [2024-06-09 22:55:18.991353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.903 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.903 [2024-06-09 22:55:19.061063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.903 [2024-06-09 22:55:19.134700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:51.903 [2024-06-09 22:55:19.134831] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.903 [2024-06-09 22:55:19.134842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.903 [2024-06-09 22:55:19.134850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
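As in the multitarget run, nvmfappstart here launches the freshly built nvmf_tgt inside the target namespace and then waits for its RPC socket. Reproduced by hand it is roughly the following; the polling loop is an approximation of the harness's waitforlisten, not a copy of it:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# block until the app has created /var/tmp/spdk.sock and answers RPCs
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done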
00:12:51.903 [2024-06-09 22:55:19.135004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.903 [2024-06-09 22:55:19.135127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.903 [2024-06-09 22:55:19.135290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.903 [2024-06-09 22:55:19.135291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.903 22:55:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:51.903 22:55:19 -- common/autotest_common.sh@852 -- # return 0 00:12:51.903 22:55:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:51.903 22:55:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:51.903 22:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 22:55:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.903 22:55:19 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:51.903 22:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.903 22:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 22:55:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.903 22:55:19 -- target/rpc.sh@26 -- # stats='{ 00:12:51.903 "tick_rate": 2400000000, 00:12:51.903 "poll_groups": [ 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_0", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_1", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_2", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_3", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [] 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 }' 00:12:51.903 22:55:19 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:51.903 22:55:19 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:51.903 22:55:19 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:51.903 22:55:19 -- target/rpc.sh@15 -- # wc -l 00:12:51.903 22:55:19 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:51.903 22:55:19 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:51.903 22:55:19 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:51.903 22:55:19 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:51.903 22:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.903 22:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 [2024-06-09 22:55:19.928916] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.903 22:55:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.903 22:55:19 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:51.903 22:55:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.903 22:55:19 -- common/autotest_common.sh@10 -- # set +x 00:12:51.903 22:55:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.903 22:55:19 -- target/rpc.sh@33 -- # stats='{ 00:12:51.903 "tick_rate": 2400000000, 00:12:51.903 "poll_groups": [ 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_0", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [ 00:12:51.903 { 00:12:51.903 "trtype": "TCP" 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_1", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [ 00:12:51.903 { 00:12:51.903 "trtype": "TCP" 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_2", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [ 00:12:51.903 { 00:12:51.903 "trtype": "TCP" 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 }, 00:12:51.903 { 00:12:51.903 "name": "nvmf_tgt_poll_group_3", 00:12:51.903 "admin_qpairs": 0, 00:12:51.903 "io_qpairs": 0, 00:12:51.903 "current_admin_qpairs": 0, 00:12:51.903 "current_io_qpairs": 0, 00:12:51.903 "pending_bdev_io": 0, 00:12:51.903 "completed_nvme_io": 0, 00:12:51.903 "transports": [ 00:12:51.903 { 00:12:51.903 "trtype": "TCP" 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 } 00:12:51.903 ] 00:12:51.903 }' 00:12:51.903 22:55:19 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:51.903 22:55:19 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:51.903 22:55:19 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:51.903 22:55:19 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.903 22:55:20 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:51.903 22:55:20 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:51.903 22:55:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:51.903 22:55:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:51.903 22:55:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:51.903 22:55:20 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:51.903 22:55:20 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:51.903 22:55:20 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:51.903 22:55:20 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:51.903 22:55:20 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:51.903 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.903 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:51.904 Malloc1 00:12:51.904 22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.904 22:55:20 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:51.904 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.904 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 
22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.165 22:55:20 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.165 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.165 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.165 22:55:20 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:52.165 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.165 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.165 22:55:20 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.165 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.165 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 [2024-06-09 22:55:20.108679] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.165 22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.165 22:55:20 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:52.165 22:55:20 -- common/autotest_common.sh@640 -- # local es=0 00:12:52.165 22:55:20 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:52.165 22:55:20 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:52.165 22:55:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.165 22:55:20 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:52.165 22:55:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.165 22:55:20 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:52.165 22:55:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:52.165 22:55:20 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:52.165 22:55:20 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.165 22:55:20 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:52.165 [2024-06-09 22:55:20.143715] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:52.165 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.165 could not add new controller: failed to write to nvme-fabrics device 00:12:52.165 22:55:20 -- common/autotest_common.sh@643 -- # es=1 00:12:52.165 22:55:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:52.165 22:55:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:52.165 22:55:20 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:52.165 22:55:20 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:52.165 22:55:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.165 22:55:20 -- common/autotest_common.sh@10 -- # set +x 00:12:52.165 22:55:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.165 22:55:20 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.080 22:55:21 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.080 22:55:21 -- common/autotest_common.sh@1177 -- # local i=0 00:12:54.080 22:55:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.080 22:55:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:54.080 22:55:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:55.993 22:55:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:55.993 22:55:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:55.993 22:55:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.993 22:55:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:55.993 22:55:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.993 22:55:23 -- common/autotest_common.sh@1187 -- # return 0 00:12:55.993 22:55:23 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.993 22:55:23 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.993 22:55:23 -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.993 22:55:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:55.993 22:55:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.993 22:55:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:55.993 22:55:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.993 22:55:23 -- common/autotest_common.sh@1210 -- # return 0 00:12:55.993 22:55:23 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:55.993 22:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.993 22:55:23 -- common/autotest_common.sh@10 -- # set +x 00:12:55.993 22:55:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.993 22:55:23 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.993 22:55:23 -- common/autotest_common.sh@640 -- # local es=0 00:12:55.993 22:55:23 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.993 22:55:23 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:55.993 22:55:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.993 22:55:23 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:55.993 22:55:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.993 22:55:23 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:55.993 22:55:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:55.993 22:55:23 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:55.993 22:55:23 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.993 22:55:23 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.993 [2024-06-09 22:55:23.950537] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:55.993 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.993 could not add new controller: failed to write to nvme-fabrics device 00:12:55.993 22:55:23 -- common/autotest_common.sh@643 -- # es=1 00:12:55.993 22:55:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:55.993 22:55:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:55.993 22:55:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:55.993 22:55:23 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:55.993 22:55:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.993 22:55:23 -- common/autotest_common.sh@10 -- # set +x 00:12:55.993 22:55:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.993 22:55:23 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:57.381 22:55:25 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:57.381 22:55:25 -- common/autotest_common.sh@1177 -- # local i=0 00:12:57.381 22:55:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.381 22:55:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:57.381 22:55:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:59.352 22:55:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:59.352 22:55:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:59.352 22:55:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.352 22:55:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:59.352 22:55:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.352 22:55:27 -- common/autotest_common.sh@1187 -- # return 0 00:12:59.352 22:55:27 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.613 22:55:27 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.613 22:55:27 -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.613 22:55:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:59.613 22:55:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.613 22:55:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:59.613 22:55:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.613 22:55:27 -- common/autotest_common.sh@1210 -- # return 0 00:12:59.613 22:55:27 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.613 22:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.613 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:59.613 22:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.613 22:55:27 -- target/rpc.sh@81 -- # seq 1 5 00:12:59.613 22:55:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:59.613 22:55:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:59.613 22:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.613 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:59.613 22:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.613 22:55:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.613 22:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.613 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:59.613 [2024-06-09 22:55:27.676263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.613 22:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.613 22:55:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:59.613 22:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.613 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:59.613 22:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.613 22:55:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.613 22:55:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.613 22:55:27 -- common/autotest_common.sh@10 -- # set +x 00:12:59.613 22:55:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.613 22:55:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.531 22:55:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.531 22:55:29 -- common/autotest_common.sh@1177 -- # local i=0 00:13:01.531 22:55:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.531 22:55:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:01.531 22:55:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:03.447 22:55:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:03.447 22:55:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:03.447 22:55:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.447 22:55:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:03.447 22:55:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.447 22:55:31 -- common/autotest_common.sh@1187 -- # return 0 00:13:03.447 22:55:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.447 22:55:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.447 22:55:31 -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.447 22:55:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:03.447 22:55:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
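Each pass of the seq 1 5 loop above provisions and tears down the same subsystem end to end. The harness's rpc_cmd is effectively a wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket seen in waitforlisten, so one iteration reduces to roughly the following; hostnqn/hostid are the values this host generated with nvme gen-hostnqn earlier in the trace, and Malloc1 is the bdev created with bdev_malloc_create 64 512 -b Malloc1:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
             --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME      # waitforserial: the namespace is visible
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1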
00:13:03.447 22:55:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:03.447 22:55:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.447 22:55:31 -- common/autotest_common.sh@1210 -- # return 0 00:13:03.447 22:55:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.447 22:55:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 [2024-06-09 22:55:31.403801] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.447 22:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.447 22:55:31 -- common/autotest_common.sh@10 -- # set +x 00:13:03.447 22:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.447 22:55:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.834 22:55:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.834 22:55:33 -- common/autotest_common.sh@1177 -- # local i=0 00:13:04.834 22:55:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.834 22:55:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:04.834 22:55:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:07.383 22:55:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:07.383 22:55:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:07.383 22:55:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.383 22:55:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:07.383 22:55:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.383 22:55:35 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:07.383 22:55:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.383 22:55:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.383 22:55:35 -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.383 22:55:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:07.384 22:55:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.384 22:55:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.384 22:55:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.384 22:55:35 -- common/autotest_common.sh@1210 -- # return 0 00:13:07.384 22:55:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:07.384 22:55:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 [2024-06-09 22:55:35.172162] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:07.384 22:55:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.384 22:55:35 -- common/autotest_common.sh@10 -- # set +x 00:13:07.384 22:55:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.384 22:55:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.770 22:55:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.770 22:55:36 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.770 22:55:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.770 22:55:36 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:08.770 22:55:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:10.687 22:55:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:10.687 22:55:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:10.687 22:55:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.687 22:55:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:10.687 22:55:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.687 22:55:38 -- common/autotest_common.sh@1187 -- # return 0 00:13:10.687 22:55:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.687 22:55:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.687 22:55:38 -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.687 22:55:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:10.687 22:55:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.687 22:55:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:10.687 22:55:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.687 22:55:38 -- common/autotest_common.sh@1210 -- # return 0 00:13:10.687 22:55:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:10.687 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.687 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.687 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.687 22:55:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.687 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.687 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.948 22:55:38 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.948 22:55:38 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.948 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.948 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.948 22:55:38 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.948 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.948 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 [2024-06-09 22:55:38.893780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.948 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.948 22:55:38 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.948 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.948 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.948 22:55:38 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.948 22:55:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.948 22:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:10.948 22:55:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.948 
22:55:38 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.337 22:55:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.337 22:55:40 -- common/autotest_common.sh@1177 -- # local i=0 00:13:12.337 22:55:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.337 22:55:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:12.337 22:55:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:14.253 22:55:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:14.513 22:55:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:14.513 22:55:42 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.513 22:55:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:14.513 22:55:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.513 22:55:42 -- common/autotest_common.sh@1187 -- # return 0 00:13:14.513 22:55:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.513 22:55:42 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.513 22:55:42 -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.513 22:55:42 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:14.513 22:55:42 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.513 22:55:42 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:14.513 22:55:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.513 22:55:42 -- common/autotest_common.sh@1210 -- # return 0 00:13:14.513 22:55:42 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.513 22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.513 22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.513 22:55:42 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.513 22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.513 22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 [2024-06-09 22:55:42.636770] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.513 
22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.513 22:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.513 22:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:14.513 22:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.513 22:55:42 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.428 22:55:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.428 22:55:44 -- common/autotest_common.sh@1177 -- # local i=0 00:13:16.428 22:55:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.428 22:55:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:16.428 22:55:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:18.344 22:55:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:18.344 22:55:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:18.344 22:55:46 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:18.344 22:55:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.344 22:55:46 -- common/autotest_common.sh@1187 -- # return 0 00:13:18.344 22:55:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.344 22:55:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.344 22:55:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:18.344 22:55:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:18.344 22:55:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@1210 -- # return 0 00:13:18.344 22:55:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@99 -- # seq 1 5 00:13:18.344 22:55:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.344 22:55:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 [2024-06-09 22:55:46.326011] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.344 22:55:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.344 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.344 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.344 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.344 22:55:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 [2024-06-09 22:55:46.382124] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.345 22:55:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 [2024-06-09 22:55:46.442298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.345 22:55:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 [2024-06-09 22:55:46.498497] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 
22:55:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.345 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.345 22:55:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.345 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.345 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.606 22:55:46 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 [2024-06-09 22:55:46.554687] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
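The second loop above (target/rpc.sh@99-107) churns the same subsystem five times without a host connection, and the nvmf_get_stats dump that follows is reduced with the jsum helper (target/rpc.sh@19-20). A condensed sketch of both, written against scripts/rpc.py directly; in the test the calls go through the rpc_cmd wrapper, and exactly how jsum is fed the captured JSON is an assumption here:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done

  # sum one numeric field across all poll groups reported by nvmf_get_stats
  stats=$($rpc nvmf_get_stats)
  jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))   # the non-zero check seen in the trace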
00:13:18.606 22:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.606 22:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 22:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.606 22:55:46 -- target/rpc.sh@110 -- # stats='{ 00:13:18.606 "tick_rate": 2400000000, 00:13:18.606 "poll_groups": [ 00:13:18.606 { 00:13:18.606 "name": "nvmf_tgt_poll_group_0", 00:13:18.606 "admin_qpairs": 0, 00:13:18.606 "io_qpairs": 224, 00:13:18.606 "current_admin_qpairs": 0, 00:13:18.606 "current_io_qpairs": 0, 00:13:18.606 "pending_bdev_io": 0, 00:13:18.606 "completed_nvme_io": 371, 00:13:18.606 "transports": [ 00:13:18.606 { 00:13:18.606 "trtype": "TCP" 00:13:18.606 } 00:13:18.606 ] 00:13:18.606 }, 00:13:18.606 { 00:13:18.606 "name": "nvmf_tgt_poll_group_1", 00:13:18.607 "admin_qpairs": 1, 00:13:18.607 "io_qpairs": 223, 00:13:18.607 "current_admin_qpairs": 0, 00:13:18.607 "current_io_qpairs": 0, 00:13:18.607 "pending_bdev_io": 0, 00:13:18.607 "completed_nvme_io": 277, 00:13:18.607 "transports": [ 00:13:18.607 { 00:13:18.607 "trtype": "TCP" 00:13:18.607 } 00:13:18.607 ] 00:13:18.607 }, 00:13:18.607 { 00:13:18.607 "name": "nvmf_tgt_poll_group_2", 00:13:18.607 "admin_qpairs": 6, 00:13:18.607 "io_qpairs": 218, 00:13:18.607 "current_admin_qpairs": 0, 00:13:18.607 "current_io_qpairs": 0, 00:13:18.607 "pending_bdev_io": 0, 00:13:18.607 "completed_nvme_io": 218, 00:13:18.607 "transports": [ 00:13:18.607 { 00:13:18.607 "trtype": "TCP" 00:13:18.607 } 00:13:18.607 ] 00:13:18.607 }, 00:13:18.607 { 00:13:18.607 "name": "nvmf_tgt_poll_group_3", 00:13:18.607 "admin_qpairs": 0, 00:13:18.607 "io_qpairs": 224, 00:13:18.607 "current_admin_qpairs": 0, 00:13:18.607 "current_io_qpairs": 0, 00:13:18.607 "pending_bdev_io": 0, 00:13:18.607 "completed_nvme_io": 373, 00:13:18.607 "transports": [ 00:13:18.607 { 00:13:18.607 "trtype": "TCP" 00:13:18.607 } 00:13:18.607 ] 00:13:18.607 } 00:13:18.607 ] 00:13:18.607 }' 00:13:18.607 22:55:46 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.607 22:55:46 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.607 22:55:46 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.607 22:55:46 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.607 22:55:46 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:18.607 22:55:46 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.607 22:55:46 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.607 22:55:46 -- target/rpc.sh@123 -- # nvmftestfini 00:13:18.607 22:55:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:18.607 22:55:46 -- nvmf/common.sh@116 -- # sync 00:13:18.607 22:55:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:18.607 22:55:46 -- nvmf/common.sh@119 -- # set +e 00:13:18.607 22:55:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:18.607 22:55:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:18.607 rmmod nvme_tcp 00:13:18.607 rmmod nvme_fabrics 00:13:18.607 rmmod nvme_keyring 00:13:18.607 22:55:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:18.607 22:55:46 -- nvmf/common.sh@123 -- # set -e 00:13:18.607 22:55:46 -- 
nvmf/common.sh@124 -- # return 0 00:13:18.607 22:55:46 -- nvmf/common.sh@477 -- # '[' -n 3991595 ']' 00:13:18.607 22:55:46 -- nvmf/common.sh@478 -- # killprocess 3991595 00:13:18.607 22:55:46 -- common/autotest_common.sh@926 -- # '[' -z 3991595 ']' 00:13:18.607 22:55:46 -- common/autotest_common.sh@930 -- # kill -0 3991595 00:13:18.607 22:55:46 -- common/autotest_common.sh@931 -- # uname 00:13:18.607 22:55:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.607 22:55:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3991595 00:13:18.867 22:55:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.867 22:55:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.867 22:55:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3991595' 00:13:18.867 killing process with pid 3991595 00:13:18.867 22:55:46 -- common/autotest_common.sh@945 -- # kill 3991595 00:13:18.868 22:55:46 -- common/autotest_common.sh@950 -- # wait 3991595 00:13:18.868 22:55:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:18.868 22:55:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:18.868 22:55:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:18.868 22:55:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.868 22:55:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:18.868 22:55:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.868 22:55:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.868 22:55:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.473 22:55:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:21.473 00:13:21.473 real 0m37.345s 00:13:21.473 user 1m53.186s 00:13:21.473 sys 0m7.152s 00:13:21.473 22:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.473 22:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:21.473 ************************************ 00:13:21.473 END TEST nvmf_rpc 00:13:21.473 ************************************ 00:13:21.473 22:55:49 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.473 22:55:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:21.473 22:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:21.473 22:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:21.473 ************************************ 00:13:21.473 START TEST nvmf_invalid 00:13:21.473 ************************************ 00:13:21.473 22:55:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.473 * Looking for test storage... 
00:13:21.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.473 22:55:49 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.473 22:55:49 -- nvmf/common.sh@7 -- # uname -s 00:13:21.473 22:55:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.473 22:55:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.473 22:55:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.473 22:55:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.473 22:55:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.473 22:55:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.473 22:55:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.473 22:55:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.473 22:55:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.474 22:55:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.474 22:55:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.474 22:55:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.474 22:55:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.474 22:55:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.474 22:55:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.474 22:55:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.474 22:55:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.474 22:55:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.474 22:55:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.474 22:55:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.474 22:55:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.474 22:55:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.474 22:55:49 -- paths/export.sh@5 -- # export PATH 00:13:21.474 22:55:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.474 22:55:49 -- nvmf/common.sh@46 -- # : 0 00:13:21.474 22:55:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:21.474 22:55:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:21.474 22:55:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:21.474 22:55:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.474 22:55:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.474 22:55:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:21.474 22:55:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:21.474 22:55:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:21.474 22:55:49 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:21.474 22:55:49 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:21.474 22:55:49 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:21.474 22:55:49 -- target/invalid.sh@14 -- # target=foobar 00:13:21.474 22:55:49 -- target/invalid.sh@16 -- # RANDOM=0 00:13:21.474 22:55:49 -- target/invalid.sh@34 -- # nvmftestinit 00:13:21.474 22:55:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:21.474 22:55:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.474 22:55:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:21.474 22:55:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:21.474 22:55:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:21.474 22:55:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.474 22:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.474 22:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.474 22:55:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:21.474 22:55:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:21.474 22:55:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:21.474 22:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:28.066 22:55:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.066 22:55:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:28.066 22:55:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:28.066 22:55:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:28.066 22:55:55 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:28.066 22:55:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:28.067 22:55:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:28.067 22:55:55 -- nvmf/common.sh@294 -- # net_devs=() 00:13:28.067 22:55:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:28.067 22:55:55 -- nvmf/common.sh@295 -- # e810=() 00:13:28.067 22:55:55 -- nvmf/common.sh@295 -- # local -ga e810 00:13:28.067 22:55:55 -- nvmf/common.sh@296 -- # x722=() 00:13:28.067 22:55:55 -- nvmf/common.sh@296 -- # local -ga x722 00:13:28.067 22:55:55 -- nvmf/common.sh@297 -- # mlx=() 00:13:28.067 22:55:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:28.067 22:55:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.067 22:55:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:28.067 22:55:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:28.067 22:55:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.067 22:55:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:28.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:28.067 22:55:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.067 22:55:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:28.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:28.067 22:55:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.067 
22:55:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.067 22:55:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.067 22:55:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:28.067 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:28.067 22:55:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.067 22:55:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.067 22:55:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.067 22:55:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.067 22:55:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:28.067 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:28.067 22:55:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.067 22:55:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:28.067 22:55:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:28.067 22:55:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:28.067 22:55:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.067 22:55:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.067 22:55:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.067 22:55:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:28.067 22:55:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.067 22:55:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.067 22:55:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:28.067 22:55:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.067 22:55:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.067 22:55:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:28.067 22:55:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:28.067 22:55:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.067 22:55:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.067 22:55:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.067 22:55:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.067 22:55:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:28.067 22:55:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.067 22:55:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.067 22:55:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.067 22:55:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:28.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:13:28.067 00:13:28.067 --- 10.0.0.2 ping statistics --- 00:13:28.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.067 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:13:28.067 22:55:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:13:28.067 00:13:28.067 --- 10.0.0.1 ping statistics --- 00:13:28.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.067 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:13:28.067 22:55:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.067 22:55:56 -- nvmf/common.sh@410 -- # return 0 00:13:28.067 22:55:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.067 22:55:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.067 22:55:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.067 22:55:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.067 22:55:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.067 22:55:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.067 22:55:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.067 22:55:56 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:28.067 22:55:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.067 22:55:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.067 22:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.067 22:55:56 -- nvmf/common.sh@469 -- # nvmfpid=4001251 00:13:28.067 22:55:56 -- nvmf/common.sh@470 -- # waitforlisten 4001251 00:13:28.067 22:55:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.067 22:55:56 -- common/autotest_common.sh@819 -- # '[' -z 4001251 ']' 00:13:28.067 22:55:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.067 22:55:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.067 22:55:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.067 22:55:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.067 22:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.067 [2024-06-09 22:55:56.217607] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:28.067 [2024-06-09 22:55:56.217658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.329 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.329 [2024-06-09 22:55:56.284912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.329 [2024-06-09 22:55:56.348854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.329 [2024-06-09 22:55:56.348990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.329 [2024-06-09 22:55:56.349001] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.329 [2024-06-09 22:55:56.349009] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
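Before the invalid-input cases run, nvmftestinit splits the two e810 ports between the host and a cvl_0_0_ns_spdk namespace and verifies reachability, as traced above. Condensed, the sequence is the following (interface names and addresses are the ones printed in the trace; flushes and error handling are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability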
00:13:28.329 [2024-06-09 22:55:56.349172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.329 [2024-06-09 22:55:56.349288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.329 [2024-06-09 22:55:56.349447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.329 [2024-06-09 22:55:56.349447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.902 22:55:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.902 22:55:56 -- common/autotest_common.sh@852 -- # return 0 00:13:28.902 22:55:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.902 22:55:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.902 22:55:56 -- common/autotest_common.sh@10 -- # set +x 00:13:28.902 22:55:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.902 22:55:57 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.902 22:55:57 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31791 00:13:29.164 [2024-06-09 22:55:57.163947] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:29.164 22:55:57 -- target/invalid.sh@40 -- # out='request: 00:13:29.164 { 00:13:29.164 "nqn": "nqn.2016-06.io.spdk:cnode31791", 00:13:29.164 "tgt_name": "foobar", 00:13:29.164 "method": "nvmf_create_subsystem", 00:13:29.164 "req_id": 1 00:13:29.164 } 00:13:29.164 Got JSON-RPC error response 00:13:29.164 response: 00:13:29.164 { 00:13:29.164 "code": -32603, 00:13:29.164 "message": "Unable to find target foobar" 00:13:29.164 }' 00:13:29.164 22:55:57 -- target/invalid.sh@41 -- # [[ request: 00:13:29.164 { 00:13:29.164 "nqn": "nqn.2016-06.io.spdk:cnode31791", 00:13:29.164 "tgt_name": "foobar", 00:13:29.164 "method": "nvmf_create_subsystem", 00:13:29.164 "req_id": 1 00:13:29.164 } 00:13:29.164 Got JSON-RPC error response 00:13:29.164 response: 00:13:29.164 { 00:13:29.164 "code": -32603, 00:13:29.164 "message": "Unable to find target foobar" 00:13:29.164 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:29.164 22:55:57 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:29.164 22:55:57 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14911 00:13:29.164 [2024-06-09 22:55:57.336547] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14911: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:29.425 22:55:57 -- target/invalid.sh@45 -- # out='request: 00:13:29.425 { 00:13:29.425 "nqn": "nqn.2016-06.io.spdk:cnode14911", 00:13:29.425 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.425 "method": "nvmf_create_subsystem", 00:13:29.425 "req_id": 1 00:13:29.425 } 00:13:29.425 Got JSON-RPC error response 00:13:29.425 response: 00:13:29.425 { 00:13:29.425 "code": -32602, 00:13:29.425 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.425 }' 00:13:29.425 22:55:57 -- target/invalid.sh@46 -- # [[ request: 00:13:29.425 { 00:13:29.425 "nqn": "nqn.2016-06.io.spdk:cnode14911", 00:13:29.425 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:29.425 "method": "nvmf_create_subsystem", 00:13:29.425 "req_id": 1 00:13:29.425 } 00:13:29.425 Got JSON-RPC error response 00:13:29.425 response: 00:13:29.425 { 
00:13:29.425 "code": -32602, 00:13:29.426 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:29.426 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.426 22:55:57 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:29.426 22:55:57 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6566 00:13:29.426 [2024-06-09 22:55:57.509133] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6566: invalid model number 'SPDK_Controller' 00:13:29.426 22:55:57 -- target/invalid.sh@50 -- # out='request: 00:13:29.426 { 00:13:29.426 "nqn": "nqn.2016-06.io.spdk:cnode6566", 00:13:29.426 "model_number": "SPDK_Controller\u001f", 00:13:29.426 "method": "nvmf_create_subsystem", 00:13:29.426 "req_id": 1 00:13:29.426 } 00:13:29.426 Got JSON-RPC error response 00:13:29.426 response: 00:13:29.426 { 00:13:29.426 "code": -32602, 00:13:29.426 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.426 }' 00:13:29.426 22:55:57 -- target/invalid.sh@51 -- # [[ request: 00:13:29.426 { 00:13:29.426 "nqn": "nqn.2016-06.io.spdk:cnode6566", 00:13:29.426 "model_number": "SPDK_Controller\u001f", 00:13:29.426 "method": "nvmf_create_subsystem", 00:13:29.426 "req_id": 1 00:13:29.426 } 00:13:29.426 Got JSON-RPC error response 00:13:29.426 response: 00:13:29.426 { 00:13:29.426 "code": -32602, 00:13:29.426 "message": "Invalid MN SPDK_Controller\u001f" 00:13:29.426 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:29.426 22:55:57 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:29.426 22:55:57 -- target/invalid.sh@19 -- # local length=21 ll 00:13:29.426 22:55:57 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.426 22:55:57 -- target/invalid.sh@21 -- # local chars 00:13:29.426 22:55:57 -- target/invalid.sh@22 -- # local string 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 35 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+='#' 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 121 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=y 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 69 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=E 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 34 00:13:29.426 22:55:57 -- 
target/invalid.sh@25 -- # echo -e '\x22' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+='"' 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 56 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=8 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 95 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=_ 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 78 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=N 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # printf %x 104 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:29.426 22:55:57 -- target/invalid.sh@25 -- # string+=h 00:13:29.426 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 44 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=, 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 56 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=8 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 49 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=1 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 111 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=o 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 110 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=n 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 94 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+='^' 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.688 22:55:57 -- 
target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+='|' 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 77 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=M 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 82 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=R 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 69 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=E 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 59 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=';' 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 103 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=g 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # printf %x 98 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:29.688 22:55:57 -- target/invalid.sh@25 -- # string+=b 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.688 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.688 22:55:57 -- target/invalid.sh@28 -- # [[ # == \- ]] 00:13:29.688 22:55:57 -- target/invalid.sh@31 -- # echo '#yE"8_Nh,81on^|MRE;gb' 00:13:29.688 22:55:57 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '#yE"8_Nh,81on^|MRE;gb' nqn.2016-06.io.spdk:cnode29617 00:13:29.688 [2024-06-09 22:55:57.834190] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29617: invalid serial number '#yE"8_Nh,81on^|MRE;gb' 00:13:29.950 22:55:57 -- target/invalid.sh@54 -- # out='request: 00:13:29.950 { 00:13:29.950 "nqn": "nqn.2016-06.io.spdk:cnode29617", 00:13:29.950 "serial_number": "#yE\"8_Nh,81on^|MRE;gb", 00:13:29.950 "method": "nvmf_create_subsystem", 00:13:29.950 "req_id": 1 00:13:29.950 } 00:13:29.950 Got JSON-RPC error response 00:13:29.950 response: 00:13:29.950 { 00:13:29.950 "code": -32602, 00:13:29.950 "message": "Invalid SN #yE\"8_Nh,81on^|MRE;gb" 00:13:29.950 }' 00:13:29.950 22:55:57 -- target/invalid.sh@55 -- # [[ request: 00:13:29.950 { 00:13:29.950 "nqn": "nqn.2016-06.io.spdk:cnode29617", 00:13:29.950 "serial_number": "#yE\"8_Nh,81on^|MRE;gb", 00:13:29.950 "method": "nvmf_create_subsystem", 00:13:29.950 "req_id": 1 00:13:29.950 } 00:13:29.950 Got JSON-RPC error response 00:13:29.950 response: 00:13:29.950 { 00:13:29.950 "code": -32602, 00:13:29.950 
"message": "Invalid SN #yE\"8_Nh,81on^|MRE;gb" 00:13:29.950 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.950 22:55:57 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:29.950 22:55:57 -- target/invalid.sh@19 -- # local length=41 ll 00:13:29.950 22:55:57 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.950 22:55:57 -- target/invalid.sh@21 -- # local chars 00:13:29.950 22:55:57 -- target/invalid.sh@22 -- # local string 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # printf %x 35 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # string+='#' 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # printf %x 120 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # string+=x 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # printf %x 98 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # string+=b 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.950 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # printf %x 107 00:13:29.950 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=k 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 103 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=g 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 88 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=X 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 46 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=. 
00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 41 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=')' 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 47 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=/ 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 34 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+='"' 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 66 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=B 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 70 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=F 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 44 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=, 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 55 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=7 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 35 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+='#' 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 39 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=\' 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 99 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # string+=c 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:57 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # printf %x 67 00:13:29.951 22:55:57 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=C 
00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 42 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='*' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 40 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='(' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 100 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=d 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 106 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=j 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 63 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='?' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 39 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=\' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 40 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='(' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 40 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='(' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 112 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=p 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 105 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=i 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 73 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # 
string+=I 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 126 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='~' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 107 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=k 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 42 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+='*' 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 45 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=- 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 106 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=j 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # printf %x 37 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:29.951 22:55:58 -- target/invalid.sh@25 -- # string+=% 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.951 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 46 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+=. 
00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 64 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+=@ 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 121 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+=y 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 55 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+=7 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 126 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+='~' 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # printf %x 65 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:30.213 22:55:58 -- target/invalid.sh@25 -- # string+=A 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.213 22:55:58 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.213 22:55:58 -- target/invalid.sh@28 -- # [[ # == \- ]] 00:13:30.213 22:55:58 -- target/invalid.sh@31 -- # echo '#xbkgX.)/"BF,7#'\''cC*(dj?'\''((piI~k*-j%.@y7~A' 00:13:30.213 22:55:58 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '#xbkgX.)/"BF,7#'\''cC*(dj?'\''((piI~k*-j%.@y7~A' nqn.2016-06.io.spdk:cnode11827 00:13:30.213 [2024-06-09 22:55:58.307757] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11827: invalid model number '#xbkgX.)/"BF,7#'cC*(dj?'((piI~k*-j%.@y7~A' 00:13:30.213 22:55:58 -- target/invalid.sh@58 -- # out='request: 00:13:30.213 { 00:13:30.213 "nqn": "nqn.2016-06.io.spdk:cnode11827", 00:13:30.213 "model_number": "#xbkgX.)/\"BF,7#'\''cC*(dj?'\''((piI~k*-j%.@y7~A", 00:13:30.213 "method": "nvmf_create_subsystem", 00:13:30.213 "req_id": 1 00:13:30.213 } 00:13:30.213 Got JSON-RPC error response 00:13:30.213 response: 00:13:30.213 { 00:13:30.213 "code": -32602, 00:13:30.213 "message": "Invalid MN #xbkgX.)/\"BF,7#'\''cC*(dj?'\''((piI~k*-j%.@y7~A" 00:13:30.213 }' 00:13:30.213 22:55:58 -- target/invalid.sh@59 -- # [[ request: 00:13:30.213 { 00:13:30.213 "nqn": "nqn.2016-06.io.spdk:cnode11827", 00:13:30.213 "model_number": "#xbkgX.)/\"BF,7#'cC*(dj?'((piI~k*-j%.@y7~A", 00:13:30.213 "method": "nvmf_create_subsystem", 00:13:30.213 "req_id": 1 00:13:30.213 } 00:13:30.213 Got JSON-RPC error response 00:13:30.213 response: 00:13:30.213 { 00:13:30.213 "code": -32602, 00:13:30.213 "message": "Invalid MN #xbkgX.)/\"BF,7#'cC*(dj?'((piI~k*-j%.@y7~A" 00:13:30.213 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.213 22:55:58 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:30.475 [2024-06-09 
22:55:58.472355] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.475 22:55:58 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:30.736 22:55:58 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:30.736 22:55:58 -- target/invalid.sh@67 -- # echo '' 00:13:30.736 22:55:58 -- target/invalid.sh@67 -- # head -n 1 00:13:30.736 22:55:58 -- target/invalid.sh@67 -- # IP= 00:13:30.736 22:55:58 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:30.736 [2024-06-09 22:55:58.813495] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:30.736 22:55:58 -- target/invalid.sh@69 -- # out='request: 00:13:30.736 { 00:13:30.736 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:30.736 "listen_address": { 00:13:30.736 "trtype": "tcp", 00:13:30.736 "traddr": "", 00:13:30.736 "trsvcid": "4421" 00:13:30.736 }, 00:13:30.736 "method": "nvmf_subsystem_remove_listener", 00:13:30.736 "req_id": 1 00:13:30.736 } 00:13:30.736 Got JSON-RPC error response 00:13:30.736 response: 00:13:30.736 { 00:13:30.736 "code": -32602, 00:13:30.736 "message": "Invalid parameters" 00:13:30.736 }' 00:13:30.736 22:55:58 -- target/invalid.sh@70 -- # [[ request: 00:13:30.736 { 00:13:30.736 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:30.736 "listen_address": { 00:13:30.736 "trtype": "tcp", 00:13:30.737 "traddr": "", 00:13:30.737 "trsvcid": "4421" 00:13:30.737 }, 00:13:30.737 "method": "nvmf_subsystem_remove_listener", 00:13:30.737 "req_id": 1 00:13:30.737 } 00:13:30.737 Got JSON-RPC error response 00:13:30.737 response: 00:13:30.737 { 00:13:30.737 "code": -32602, 00:13:30.737 "message": "Invalid parameters" 00:13:30.737 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:30.737 22:55:58 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5908 -i 0 00:13:30.998 [2024-06-09 22:55:58.982021] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5908: invalid cntlid range [0-65519] 00:13:30.998 22:55:59 -- target/invalid.sh@73 -- # out='request: 00:13:30.998 { 00:13:30.998 "nqn": "nqn.2016-06.io.spdk:cnode5908", 00:13:30.998 "min_cntlid": 0, 00:13:30.998 "method": "nvmf_create_subsystem", 00:13:30.998 "req_id": 1 00:13:30.998 } 00:13:30.998 Got JSON-RPC error response 00:13:30.998 response: 00:13:30.998 { 00:13:30.998 "code": -32602, 00:13:30.998 "message": "Invalid cntlid range [0-65519]" 00:13:30.998 }' 00:13:30.998 22:55:59 -- target/invalid.sh@74 -- # [[ request: 00:13:30.998 { 00:13:30.998 "nqn": "nqn.2016-06.io.spdk:cnode5908", 00:13:30.998 "min_cntlid": 0, 00:13:30.998 "method": "nvmf_create_subsystem", 00:13:30.998 "req_id": 1 00:13:30.998 } 00:13:30.998 Got JSON-RPC error response 00:13:30.998 response: 00:13:30.998 { 00:13:30.998 "code": -32602, 00:13:30.998 "message": "Invalid cntlid range [0-65519]" 00:13:30.998 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:30.998 22:55:59 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14597 -i 65520 00:13:30.998 [2024-06-09 22:55:59.150602] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14597: invalid cntlid range [65520-65519] 00:13:31.259 
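The long character-by-character trace earlier in this test is target/invalid.sh's gen_random_s helper assembling a throwaway serial number and model number from ASCII codes 32 through 127 so the target can be shown to reject them. The following is a compact sketch of that idea, distilled from the trace rather than copied from the script, so treat the wrapper itself as illustrative:

    # Sketch of gen_random_s: build an n-character string from ASCII 32..127,
    # one printf %x / echo -e step per character, as the trace above does.
    gen_random_s() {
        local length=$1 string='' code ll
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 96 ))                  # 32..127 inclusive
            string+=$(echo -e "\\x$(printf %x "$code")")  # hex escape -> character
        done
        printf '%s\n' "$string"
    }
    gen_random_s 21    # e.g. a 21-character candidate serial number

The resulting strings are handed to nvmf_create_subsystem through -s and -d, which must fail with "Invalid SN" and "Invalid MN" respectively, exactly as the JSON-RPC responses above show.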
22:55:59 -- target/invalid.sh@75 -- # out='request: 00:13:31.259 { 00:13:31.259 "nqn": "nqn.2016-06.io.spdk:cnode14597", 00:13:31.259 "min_cntlid": 65520, 00:13:31.259 "method": "nvmf_create_subsystem", 00:13:31.259 "req_id": 1 00:13:31.259 } 00:13:31.259 Got JSON-RPC error response 00:13:31.259 response: 00:13:31.259 { 00:13:31.259 "code": -32602, 00:13:31.259 "message": "Invalid cntlid range [65520-65519]" 00:13:31.259 }' 00:13:31.259 22:55:59 -- target/invalid.sh@76 -- # [[ request: 00:13:31.259 { 00:13:31.259 "nqn": "nqn.2016-06.io.spdk:cnode14597", 00:13:31.259 "min_cntlid": 65520, 00:13:31.259 "method": "nvmf_create_subsystem", 00:13:31.259 "req_id": 1 00:13:31.259 } 00:13:31.259 Got JSON-RPC error response 00:13:31.259 response: 00:13:31.259 { 00:13:31.259 "code": -32602, 00:13:31.259 "message": "Invalid cntlid range [65520-65519]" 00:13:31.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.259 22:55:59 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27423 -I 0 00:13:31.259 [2024-06-09 22:55:59.311114] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27423: invalid cntlid range [1-0] 00:13:31.259 22:55:59 -- target/invalid.sh@77 -- # out='request: 00:13:31.260 { 00:13:31.260 "nqn": "nqn.2016-06.io.spdk:cnode27423", 00:13:31.260 "max_cntlid": 0, 00:13:31.260 "method": "nvmf_create_subsystem", 00:13:31.260 "req_id": 1 00:13:31.260 } 00:13:31.260 Got JSON-RPC error response 00:13:31.260 response: 00:13:31.260 { 00:13:31.260 "code": -32602, 00:13:31.260 "message": "Invalid cntlid range [1-0]" 00:13:31.260 }' 00:13:31.260 22:55:59 -- target/invalid.sh@78 -- # [[ request: 00:13:31.260 { 00:13:31.260 "nqn": "nqn.2016-06.io.spdk:cnode27423", 00:13:31.260 "max_cntlid": 0, 00:13:31.260 "method": "nvmf_create_subsystem", 00:13:31.260 "req_id": 1 00:13:31.260 } 00:13:31.260 Got JSON-RPC error response 00:13:31.260 response: 00:13:31.260 { 00:13:31.260 "code": -32602, 00:13:31.260 "message": "Invalid cntlid range [1-0]" 00:13:31.260 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.260 22:55:59 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19006 -I 65520 00:13:31.521 [2024-06-09 22:55:59.471648] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19006: invalid cntlid range [1-65520] 00:13:31.521 22:55:59 -- target/invalid.sh@79 -- # out='request: 00:13:31.521 { 00:13:31.521 "nqn": "nqn.2016-06.io.spdk:cnode19006", 00:13:31.521 "max_cntlid": 65520, 00:13:31.521 "method": "nvmf_create_subsystem", 00:13:31.521 "req_id": 1 00:13:31.521 } 00:13:31.521 Got JSON-RPC error response 00:13:31.521 response: 00:13:31.521 { 00:13:31.521 "code": -32602, 00:13:31.521 "message": "Invalid cntlid range [1-65520]" 00:13:31.521 }' 00:13:31.521 22:55:59 -- target/invalid.sh@80 -- # [[ request: 00:13:31.521 { 00:13:31.521 "nqn": "nqn.2016-06.io.spdk:cnode19006", 00:13:31.521 "max_cntlid": 65520, 00:13:31.521 "method": "nvmf_create_subsystem", 00:13:31.521 "req_id": 1 00:13:31.521 } 00:13:31.521 Got JSON-RPC error response 00:13:31.521 response: 00:13:31.521 { 00:13:31.521 "code": -32602, 00:13:31.521 "message": "Invalid cntlid range [1-65520]" 00:13:31.521 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.521 22:55:59 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20852 -i 6 -I 5 00:13:31.521 [2024-06-09 22:55:59.640222] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20852: invalid cntlid range [6-5] 00:13:31.521 22:55:59 -- target/invalid.sh@83 -- # out='request: 00:13:31.521 { 00:13:31.521 "nqn": "nqn.2016-06.io.spdk:cnode20852", 00:13:31.521 "min_cntlid": 6, 00:13:31.521 "max_cntlid": 5, 00:13:31.521 "method": "nvmf_create_subsystem", 00:13:31.521 "req_id": 1 00:13:31.521 } 00:13:31.521 Got JSON-RPC error response 00:13:31.521 response: 00:13:31.521 { 00:13:31.521 "code": -32602, 00:13:31.521 "message": "Invalid cntlid range [6-5]" 00:13:31.521 }' 00:13:31.521 22:55:59 -- target/invalid.sh@84 -- # [[ request: 00:13:31.521 { 00:13:31.521 "nqn": "nqn.2016-06.io.spdk:cnode20852", 00:13:31.521 "min_cntlid": 6, 00:13:31.521 "max_cntlid": 5, 00:13:31.521 "method": "nvmf_create_subsystem", 00:13:31.521 "req_id": 1 00:13:31.521 } 00:13:31.521 Got JSON-RPC error response 00:13:31.521 response: 00:13:31.521 { 00:13:31.521 "code": -32602, 00:13:31.521 "message": "Invalid cntlid range [6-5]" 00:13:31.521 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:31.521 22:55:59 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:31.782 22:55:59 -- target/invalid.sh@87 -- # out='request: 00:13:31.782 { 00:13:31.782 "name": "foobar", 00:13:31.782 "method": "nvmf_delete_target", 00:13:31.782 "req_id": 1 00:13:31.782 } 00:13:31.782 Got JSON-RPC error response 00:13:31.782 response: 00:13:31.782 { 00:13:31.782 "code": -32602, 00:13:31.782 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:31.782 }' 00:13:31.782 22:55:59 -- target/invalid.sh@88 -- # [[ request: 00:13:31.782 { 00:13:31.782 "name": "foobar", 00:13:31.782 "method": "nvmf_delete_target", 00:13:31.782 "req_id": 1 00:13:31.782 } 00:13:31.782 Got JSON-RPC error response 00:13:31.782 response: 00:13:31.782 { 00:13:31.782 "code": -32602, 00:13:31.782 "message": "The specified target doesn't exist, cannot delete it." 
00:13:31.782 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:31.782 22:55:59 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:31.782 22:55:59 -- target/invalid.sh@91 -- # nvmftestfini 00:13:31.782 22:55:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:31.782 22:55:59 -- nvmf/common.sh@116 -- # sync 00:13:31.782 22:55:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:31.782 22:55:59 -- nvmf/common.sh@119 -- # set +e 00:13:31.782 22:55:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:31.782 22:55:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:31.782 rmmod nvme_tcp 00:13:31.782 rmmod nvme_fabrics 00:13:31.782 rmmod nvme_keyring 00:13:31.782 22:55:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:31.782 22:55:59 -- nvmf/common.sh@123 -- # set -e 00:13:31.782 22:55:59 -- nvmf/common.sh@124 -- # return 0 00:13:31.782 22:55:59 -- nvmf/common.sh@477 -- # '[' -n 4001251 ']' 00:13:31.782 22:55:59 -- nvmf/common.sh@478 -- # killprocess 4001251 00:13:31.782 22:55:59 -- common/autotest_common.sh@926 -- # '[' -z 4001251 ']' 00:13:31.782 22:55:59 -- common/autotest_common.sh@930 -- # kill -0 4001251 00:13:31.782 22:55:59 -- common/autotest_common.sh@931 -- # uname 00:13:31.782 22:55:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:31.782 22:55:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4001251 00:13:31.782 22:55:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:31.782 22:55:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:31.782 22:55:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4001251' 00:13:31.782 killing process with pid 4001251 00:13:31.782 22:55:59 -- common/autotest_common.sh@945 -- # kill 4001251 00:13:31.782 22:55:59 -- common/autotest_common.sh@950 -- # wait 4001251 00:13:32.045 22:56:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:32.045 22:56:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:32.045 22:56:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:32.045 22:56:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:32.045 22:56:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:32.045 22:56:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.045 22:56:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.045 22:56:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.970 22:56:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:33.970 00:13:33.970 real 0m13.032s 00:13:33.970 user 0m18.764s 00:13:33.970 sys 0m6.071s 00:13:33.970 22:56:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.970 22:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:33.970 ************************************ 00:13:33.970 END TEST nvmf_invalid 00:13:33.970 ************************************ 00:13:34.231 22:56:02 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.231 22:56:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:34.231 22:56:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.232 22:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:34.232 ************************************ 00:13:34.232 START TEST nvmf_abort 00:13:34.232 ************************************ 00:13:34.232 22:56:02 -- 
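Every check in the nvmf_invalid run that just completed follows the same pattern: call scripts/rpc.py with a deliberately bad argument, capture the JSON-RPC error text, and glob-match the expected message. A hedged sketch of one such check, reusing the min_cntlid case from the trace (the standalone snippet is illustrative; the script itself wraps this in helper functions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # A min_cntlid of 0 must be rejected with "Invalid cntlid range", as logged above.
    out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5908 -i 0 2>&1) || true
    if [[ $out == *"Invalid cntlid range"* ]]; then
        echo "target rejected the bad cntlid range as expected"
    else
        echo "unexpected response: $out" >&2
        exit 1
    fi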
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.232 * Looking for test storage... 00:13:34.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.232 22:56:02 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.232 22:56:02 -- nvmf/common.sh@7 -- # uname -s 00:13:34.232 22:56:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.232 22:56:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.232 22:56:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.232 22:56:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.232 22:56:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.232 22:56:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.232 22:56:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.232 22:56:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.232 22:56:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.232 22:56:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.232 22:56:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:34.232 22:56:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:34.232 22:56:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.232 22:56:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.232 22:56:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.232 22:56:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.232 22:56:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.232 22:56:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.232 22:56:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.232 22:56:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.232 22:56:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.232 22:56:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.232 22:56:02 -- paths/export.sh@5 -- # export PATH 00:13:34.232 22:56:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.232 22:56:02 -- nvmf/common.sh@46 -- # : 0 00:13:34.232 22:56:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:34.232 22:56:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:34.232 22:56:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:34.232 22:56:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.232 22:56:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.232 22:56:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:34.232 22:56:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:34.232 22:56:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:34.232 22:56:02 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.232 22:56:02 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:34.232 22:56:02 -- target/abort.sh@14 -- # nvmftestinit 00:13:34.232 22:56:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:34.232 22:56:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.232 22:56:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:34.232 22:56:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:34.232 22:56:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:34.232 22:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.232 22:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.232 22:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.232 22:56:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:34.232 22:56:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:34.232 22:56:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:34.232 22:56:02 -- common/autotest_common.sh@10 -- # set +x 00:13:42.378 22:56:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:42.378 22:56:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:42.378 22:56:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:42.378 22:56:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:42.378 22:56:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:42.378 22:56:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:42.378 22:56:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:42.378 22:56:09 -- nvmf/common.sh@294 -- # net_devs=() 00:13:42.378 22:56:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:42.378 22:56:09 -- nvmf/common.sh@295 -- 
# e810=() 00:13:42.378 22:56:09 -- nvmf/common.sh@295 -- # local -ga e810 00:13:42.378 22:56:09 -- nvmf/common.sh@296 -- # x722=() 00:13:42.378 22:56:09 -- nvmf/common.sh@296 -- # local -ga x722 00:13:42.378 22:56:09 -- nvmf/common.sh@297 -- # mlx=() 00:13:42.378 22:56:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:42.378 22:56:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:42.378 22:56:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:42.378 22:56:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:42.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:42.378 22:56:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:42.378 22:56:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:42.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:42.378 22:56:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:42.378 22:56:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.378 22:56:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.378 22:56:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:42.378 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:13:42.378 22:56:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:42.378 22:56:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:42.378 22:56:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:42.378 22:56:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:42.378 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:42.378 22:56:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:42.378 22:56:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:42.378 22:56:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:42.378 22:56:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:42.378 22:56:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:42.378 22:56:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:42.378 22:56:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:42.378 22:56:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:42.378 22:56:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:42.378 22:56:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:42.378 22:56:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:42.378 22:56:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:42.378 22:56:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:42.378 22:56:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:42.378 22:56:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:42.378 22:56:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:42.378 22:56:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:42.378 22:56:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:42.378 22:56:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:42.378 22:56:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:42.378 22:56:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:42.378 22:56:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:42.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:42.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:13:42.378 00:13:42.378 --- 10.0.0.2 ping statistics --- 00:13:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.378 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:13:42.378 22:56:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:42.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:42.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:13:42.378 00:13:42.378 --- 10.0.0.1 ping statistics --- 00:13:42.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:42.378 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:13:42.378 22:56:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:42.378 22:56:09 -- nvmf/common.sh@410 -- # return 0 00:13:42.378 22:56:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:42.378 22:56:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:42.379 22:56:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:42.379 22:56:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:42.379 22:56:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:42.379 22:56:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:42.379 22:56:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:42.379 22:56:09 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:42.379 22:56:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:42.379 22:56:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:42.379 22:56:09 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 22:56:09 -- nvmf/common.sh@469 -- # nvmfpid=4006448 00:13:42.379 22:56:09 -- nvmf/common.sh@470 -- # waitforlisten 4006448 00:13:42.379 22:56:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:42.379 22:56:09 -- common/autotest_common.sh@819 -- # '[' -z 4006448 ']' 00:13:42.379 22:56:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.379 22:56:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.379 22:56:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:42.379 22:56:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.379 22:56:09 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 [2024-06-09 22:56:09.578584] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:42.379 [2024-06-09 22:56:09.578646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:42.379 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.379 [2024-06-09 22:56:09.648483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.379 [2024-06-09 22:56:09.719992] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:42.379 [2024-06-09 22:56:09.720112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:42.379 [2024-06-09 22:56:09.720120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:42.379 [2024-06-09 22:56:09.720128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
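The nvmf_tcp_init sequence above isolates one E810 port in a network namespace so the target and the initiator can exercise real hardware on a single host, and the target application is then launched inside that namespace. A condensed sketch of those steps, with interface names, addresses, and flags copied from the log (run as root; the real helpers also flush addresses first and wait for the RPC socket):

    # Target-side port goes into its own namespace; initiator side stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # default ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> default ns

    # The target then runs inside the namespace with the core mask from the log:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &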
00:13:42.379 [2024-06-09 22:56:09.720257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.379 [2024-06-09 22:56:09.720393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.379 [2024-06-09 22:56:09.720394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.379 22:56:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:42.379 22:56:10 -- common/autotest_common.sh@852 -- # return 0 00:13:42.379 22:56:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:42.379 22:56:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 22:56:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.379 22:56:10 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 [2024-06-09 22:56:10.380216] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 Malloc0 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 Delay0 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 [2024-06-09 22:56:10.456683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.379 22:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.379 22:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:42.379 22:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.379 22:56:10 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:42.379 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.640 [2024-06-09 22:56:10.577238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:44.579 Initializing NVMe Controllers 00:13:44.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:44.579 controller IO queue size 128 less than required 00:13:44.579 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:44.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:44.579 Initialization complete. Launching workers. 00:13:44.579 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 26846 00:13:44.579 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26910, failed to submit 62 00:13:44.579 success 26846, unsuccess 64, failed 0 00:13:44.579 22:56:12 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:44.579 22:56:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.579 22:56:12 -- common/autotest_common.sh@10 -- # set +x 00:13:44.579 22:56:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.579 22:56:12 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:44.579 22:56:12 -- target/abort.sh@38 -- # nvmftestfini 00:13:44.579 22:56:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.579 22:56:12 -- nvmf/common.sh@116 -- # sync 00:13:44.579 22:56:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:44.579 22:56:12 -- nvmf/common.sh@119 -- # set +e 00:13:44.579 22:56:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:44.579 22:56:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:44.579 rmmod nvme_tcp 00:13:44.579 rmmod nvme_fabrics 00:13:44.579 rmmod nvme_keyring 00:13:44.579 22:56:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:44.579 22:56:12 -- nvmf/common.sh@123 -- # set -e 00:13:44.579 22:56:12 -- nvmf/common.sh@124 -- # return 0 00:13:44.579 22:56:12 -- nvmf/common.sh@477 -- # '[' -n 4006448 ']' 00:13:44.579 22:56:12 -- nvmf/common.sh@478 -- # killprocess 4006448 00:13:44.579 22:56:12 -- common/autotest_common.sh@926 -- # '[' -z 4006448 ']' 00:13:44.579 22:56:12 -- common/autotest_common.sh@930 -- # kill -0 4006448 00:13:44.579 22:56:12 -- common/autotest_common.sh@931 -- # uname 00:13:44.579 22:56:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:44.579 22:56:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4006448 00:13:44.872 22:56:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:44.872 22:56:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:44.873 22:56:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4006448' 00:13:44.873 killing process with pid 4006448 00:13:44.873 22:56:12 -- common/autotest_common.sh@945 -- # kill 4006448 00:13:44.873 22:56:12 -- common/autotest_common.sh@950 -- # wait 4006448 00:13:44.873 22:56:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:44.873 22:56:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:44.873 22:56:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:44.873 22:56:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.873 22:56:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:44.873 
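The abort run above places a delayed bdev behind an NVMe-oF/TCP subsystem so that plenty of I/O stays queued long enough to be aborted, then drives it with SPDK's abort example; the "success 26846, unsuccess 64, failed 0" line is the outcome of those aborts. A condensed sketch of the RPCs involved, with arguments copied from the trace (rpc_cmd in the real script is a thin wrapper around scripts/rpc.py):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$rpc" bdev_malloc_create 64 4096 -b Malloc0         # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
    "$rpc" bdev_delay_create -b Malloc0 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large artificial latency keeps I/O queued
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Queue deep I/O against the slow namespace and abort it (queue depth and flags from the log):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128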
22:56:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.873 22:56:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.873 22:56:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.421 22:56:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:47.421 00:13:47.421 real 0m12.831s 00:13:47.421 user 0m13.222s 00:13:47.421 sys 0m6.332s 00:13:47.421 22:56:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.421 22:56:14 -- common/autotest_common.sh@10 -- # set +x 00:13:47.421 ************************************ 00:13:47.421 END TEST nvmf_abort 00:13:47.421 ************************************ 00:13:47.421 22:56:15 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:47.421 22:56:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:47.421 22:56:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.421 22:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:47.421 ************************************ 00:13:47.421 START TEST nvmf_ns_hotplug_stress 00:13:47.421 ************************************ 00:13:47.421 22:56:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:47.421 * Looking for test storage... 00:13:47.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:47.421 22:56:15 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.421 22:56:15 -- nvmf/common.sh@7 -- # uname -s 00:13:47.421 22:56:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.421 22:56:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.421 22:56:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.421 22:56:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.421 22:56:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.421 22:56:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.421 22:56:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.421 22:56:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.421 22:56:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.421 22:56:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.421 22:56:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.421 22:56:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:47.421 22:56:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.421 22:56:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.421 22:56:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.421 22:56:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:47.421 22:56:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.421 22:56:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.421 22:56:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.421 22:56:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.421 22:56:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.422 22:56:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.422 22:56:15 -- paths/export.sh@5 -- # export PATH 00:13:47.422 22:56:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.422 22:56:15 -- nvmf/common.sh@46 -- # : 0 00:13:47.422 22:56:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:47.422 22:56:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:47.422 22:56:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:47.422 22:56:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.422 22:56:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.422 22:56:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:47.422 22:56:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:47.422 22:56:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:47.422 22:56:15 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.422 22:56:15 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:47.422 22:56:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:47.422 22:56:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.422 22:56:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:47.422 22:56:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:47.422 22:56:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:47.422 22:56:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:47.422 22:56:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.422 22:56:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.422 22:56:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:47.422 22:56:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:47.422 22:56:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:47.422 22:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:54.014 22:56:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:54.015 22:56:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:54.015 22:56:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:54.015 22:56:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:54.015 22:56:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:54.015 22:56:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:54.015 22:56:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:54.015 22:56:21 -- nvmf/common.sh@294 -- # net_devs=() 00:13:54.015 22:56:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:54.015 22:56:21 -- nvmf/common.sh@295 -- # e810=() 00:13:54.015 22:56:21 -- nvmf/common.sh@295 -- # local -ga e810 00:13:54.015 22:56:21 -- nvmf/common.sh@296 -- # x722=() 00:13:54.015 22:56:21 -- nvmf/common.sh@296 -- # local -ga x722 00:13:54.015 22:56:21 -- nvmf/common.sh@297 -- # mlx=() 00:13:54.015 22:56:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:54.015 22:56:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.015 22:56:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:54.015 22:56:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:54.015 22:56:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:54.015 22:56:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:54.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:54.015 22:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:54.015 22:56:21 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:54.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:54.015 22:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:54.015 22:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.015 22:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.015 22:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:54.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:54.015 22:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.015 22:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:54.015 22:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.015 22:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.015 22:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:54.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:54.015 22:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.015 22:56:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:54.015 22:56:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:54.015 22:56:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:54.015 22:56:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.015 22:56:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.015 22:56:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.015 22:56:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:54.015 22:56:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.015 22:56:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.015 22:56:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:54.015 22:56:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.015 22:56:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.015 22:56:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:54.015 22:56:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:54.015 22:56:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.015 22:56:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.015 22:56:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.015 22:56:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.015 22:56:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:54.015 22:56:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:13:54.277 22:56:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.277 22:56:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.277 22:56:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:54.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.761 ms 00:13:54.277 00:13:54.277 --- 10.0.0.2 ping statistics --- 00:13:54.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.277 rtt min/avg/max/mdev = 0.761/0.761/0.761/0.000 ms 00:13:54.277 22:56:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:13:54.277 00:13:54.277 --- 10.0.0.1 ping statistics --- 00:13:54.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.277 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:13:54.277 22:56:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.277 22:56:22 -- nvmf/common.sh@410 -- # return 0 00:13:54.277 22:56:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:54.277 22:56:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.277 22:56:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:54.277 22:56:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:54.277 22:56:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.277 22:56:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:54.277 22:56:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:54.277 22:56:22 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:54.277 22:56:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:54.277 22:56:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:54.277 22:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:54.277 22:56:22 -- nvmf/common.sh@469 -- # nvmfpid=4011181 00:13:54.277 22:56:22 -- nvmf/common.sh@470 -- # waitforlisten 4011181 00:13:54.277 22:56:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:54.277 22:56:22 -- common/autotest_common.sh@819 -- # '[' -z 4011181 ']' 00:13:54.277 22:56:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.277 22:56:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:54.277 22:56:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.277 22:56:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:54.277 22:56:22 -- common/autotest_common.sh@10 -- # set +x 00:13:54.277 [2024-06-09 22:56:22.395954] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
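The nvmf_tcp_init sequence just traced is what lets a single dual-port E810 card play both roles: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Stripped of the xtrace prefixes, the plumbing amounts to roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target, ~0.76 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  modprobe nvme-tcp

Every target-side command that follows, including the nvmf_tgt start here, runs through ip netns exec cvl_0_0_ns_spdk, so the listener binds to 10.0.0.2 while spdk_nvme_perf connects from the root namespace.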
00:13:54.277 [2024-06-09 22:56:22.396020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.277 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.539 [2024-06-09 22:56:22.467841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:54.539 [2024-06-09 22:56:22.540543] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:54.539 [2024-06-09 22:56:22.540663] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.539 [2024-06-09 22:56:22.540671] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.539 [2024-06-09 22:56:22.540679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.539 [2024-06-09 22:56:22.540802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.539 [2024-06-09 22:56:22.540960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.539 [2024-06-09 22:56:22.540962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.110 22:56:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:55.110 22:56:23 -- common/autotest_common.sh@852 -- # return 0 00:13:55.110 22:56:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:55.110 22:56:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:55.110 22:56:23 -- common/autotest_common.sh@10 -- # set +x 00:13:55.110 22:56:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.110 22:56:23 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:55.110 22:56:23 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:55.371 [2024-06-09 22:56:23.341377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.371 22:56:23 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:55.371 22:56:23 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.632 [2024-06-09 22:56:23.670804] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.632 22:56:23 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.892 22:56:23 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:55.892 Malloc0 00:13:55.892 22:56:24 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:56.152 Delay0 00:13:56.152 22:56:24 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.412 22:56:24 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:56.412 NULL1 00:13:56.412 22:56:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:56.673 22:56:24 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4011790 00:13:56.673 22:56:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:56.673 22:56:24 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:56.673 22:56:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.673 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.934 22:56:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.934 22:56:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:56.934 22:56:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:57.195 [2024-06-09 22:56:25.153338] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:13:57.195 true 00:13:57.195 22:56:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:57.195 22:56:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.195 22:56:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.456 22:56:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:57.456 22:56:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:57.717 true 00:13:57.717 22:56:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:57.717 22:56:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.717 22:56:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.977 22:56:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:57.978 22:56:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:57.978 true 00:13:58.238 22:56:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:58.238 22:56:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.238 22:56:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.499 22:56:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:58.499 22:56:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:58.499 true 00:13:58.499 
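At this point the ns_hotplug_stress target is fully configured and the stress cycle has begun. Condensed from the rpc.py calls traced above (rpc.py standing in for spdk/scripts/rpc.py, paths shortened), the setup is, in order:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0            # Malloc0 backs the delay bdev
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # namespace 1, cycled by the loop
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # namespace 2, resized each pass
  spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                    # PERF_PID=4011790 in this run

The -m 10 cap on the subsystem leaves headroom for the namespaces that the loop below keeps attaching and detaching while the 30-second randread workload is in flight.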
22:56:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:58.499 22:56:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.760 22:56:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.020 22:56:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:59.020 22:56:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:59.020 true 00:13:59.020 22:56:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:59.020 22:56:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.281 22:56:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.542 22:56:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:59.542 22:56:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:59.542 true 00:13:59.542 22:56:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:13:59.542 22:56:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.803 22:56:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.064 22:56:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:00.064 22:56:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:00.064 true 00:14:00.064 22:56:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:00.064 22:56:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.325 22:56:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.325 22:56:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:00.325 22:56:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:00.585 true 00:14:00.585 22:56:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:00.585 22:56:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.846 22:56:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.846 22:56:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:00.846 22:56:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:01.106 true 00:14:01.106 22:56:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:01.106 22:56:29 -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.368 22:56:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.368 22:56:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:01.368 22:56:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:01.629 true 00:14:01.629 22:56:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:01.629 22:56:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.629 22:56:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.891 22:56:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:01.891 22:56:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:02.151 true 00:14:02.151 22:56:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:02.151 22:56:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.151 22:56:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.412 22:56:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:02.412 22:56:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:02.673 true 00:14:02.673 22:56:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:02.673 22:56:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.673 22:56:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.934 22:56:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:02.934 22:56:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:02.934 true 00:14:03.196 22:56:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:03.196 22:56:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.196 22:56:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.457 22:56:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:03.457 22:56:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:03.457 true 00:14:03.787 22:56:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:03.787 22:56:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:03.787 22:56:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.062 22:56:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:04.062 22:56:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:04.062 true 00:14:04.062 22:56:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:04.062 22:56:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.334 22:56:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.334 22:56:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:04.334 22:56:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:04.594 true 00:14:04.594 22:56:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:04.594 22:56:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.594 22:56:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.853 22:56:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:04.854 22:56:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:05.114 true 00:14:05.114 22:56:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:05.114 22:56:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.114 22:56:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.374 22:56:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:05.375 22:56:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:05.635 true 00:14:05.635 22:56:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:05.635 22:56:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.635 22:56:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.896 22:56:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:05.896 22:56:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:05.896 true 00:14:06.157 22:56:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:06.157 22:56:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.157 22:56:34 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.418 22:56:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:06.418 22:56:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:06.418 true 00:14:06.418 22:56:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:06.418 22:56:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.679 22:56:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.940 22:56:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:06.940 22:56:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:06.940 true 00:14:06.940 22:56:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:06.940 22:56:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.202 22:56:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.464 22:56:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:07.464 22:56:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:07.464 true 00:14:07.464 22:56:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:07.464 22:56:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.725 22:56:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.725 22:56:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:07.725 22:56:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:07.986 true 00:14:07.986 22:56:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:07.986 22:56:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.248 22:56:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.248 22:56:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:08.248 22:56:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:08.509 true 00:14:08.509 22:56:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:08.509 22:56:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.770 22:56:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:08.770 22:56:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:08.770 22:56:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:09.031 true 00:14:09.031 22:56:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:09.031 22:56:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.293 22:56:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.293 22:56:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:09.293 22:56:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:09.554 true 00:14:09.554 22:56:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:09.554 22:56:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.554 22:56:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.816 22:56:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:09.816 22:56:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:10.078 true 00:14:10.078 22:56:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:10.078 22:56:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.078 22:56:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.339 22:56:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:10.339 22:56:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:10.339 true 00:14:10.600 22:56:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:10.600 22:56:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.600 22:56:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.861 22:56:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:10.861 22:56:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:10.861 true 00:14:10.861 22:56:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:10.861 22:56:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.122 22:56:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.384 22:56:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:11.384 22:56:39 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:11.384 true 00:14:11.384 22:56:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:11.384 22:56:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.645 22:56:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.645 22:56:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:11.645 22:56:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:11.906 true 00:14:11.906 22:56:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:11.906 22:56:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.166 22:56:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.166 22:56:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:12.166 22:56:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:12.426 true 00:14:12.426 22:56:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:12.426 22:56:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.686 22:56:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.686 22:56:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:12.686 22:56:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:12.947 true 00:14:12.947 22:56:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:12.947 22:56:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.947 22:56:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.208 22:56:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:13.208 22:56:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:13.468 true 00:14:13.468 22:56:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:13.468 22:56:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.468 22:56:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.729 22:56:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:13.729 22:56:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1035 00:14:13.989 true 00:14:13.989 22:56:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:13.989 22:56:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.989 22:56:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.249 22:56:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:14.249 22:56:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:14.249 true 00:14:14.508 22:56:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:14.508 22:56:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.508 22:56:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.769 22:56:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:14.769 22:56:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:14.769 true 00:14:14.769 22:56:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:14.769 22:56:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.029 22:56:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.290 22:56:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:15.290 22:56:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:15.290 true 00:14:15.290 22:56:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:15.290 22:56:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.551 22:56:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.551 22:56:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:15.551 22:56:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:15.811 true 00:14:15.811 22:56:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:15.811 22:56:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.071 22:56:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.071 22:56:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:16.071 22:56:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:16.332 true 00:14:16.332 22:56:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:16.332 
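Every null_size step in this stretch of output is one pass of the same hot-plug cycle: while the background perf process (pid 4011790) is still alive, the script detaches namespace 1, re-attaches Delay0, bumps null_size and resizes NULL1 underneath the running I/O; the "true" echoed after each resize is the RPC result under xtrace. The loop traced at ns_hotplug_stress.sh lines 44-50 behaves roughly like this sketch:

  while kill -0 "$PERF_PID"; do        # runs until spdk_nvme_perf (-t 30) exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))     # 1001, 1002, ... one step per pass
      rpc.py bdev_null_resize NULL1 "$null_size"
  done

A healthy run is simply this pattern repeating for the full 30 seconds, with no I/O errors reported in between.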
22:56:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.593 22:56:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.593 22:56:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:16.593 22:56:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:16.855 true 00:14:16.855 22:56:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:16.855 22:56:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.116 22:56:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.116 22:56:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:17.116 22:56:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:17.385 true 00:14:17.385 22:56:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:17.385 22:56:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.385 22:56:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.662 22:56:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:17.662 22:56:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:17.923 true 00:14:17.923 22:56:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:17.923 22:56:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.923 22:56:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.184 22:56:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:18.184 22:56:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:18.184 true 00:14:18.446 22:56:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:18.446 22:56:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.446 22:56:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.706 22:56:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:18.706 22:56:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:18.706 true 00:14:18.706 22:56:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:18.706 22:56:46 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.967 22:56:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.228 22:56:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:19.228 22:56:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:19.228 true 00:14:19.228 22:56:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:19.228 22:56:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.489 22:56:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:19.750 22:56:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:19.750 22:56:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:19.750 true 00:14:19.750 22:56:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:19.750 22:56:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.011 22:56:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.011 22:56:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:20.011 22:56:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:20.273 true 00:14:20.273 22:56:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:20.273 22:56:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.534 22:56:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.534 22:56:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:20.534 22:56:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:20.795 true 00:14:20.795 22:56:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:20.795 22:56:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.056 22:56:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.056 22:56:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:21.056 22:56:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:21.317 true 00:14:21.317 22:56:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:21.317 22:56:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.578 22:56:49 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:21.578 22:56:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:21.578 22:56:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:21.839 true 00:14:21.839 22:56:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:21.839 22:56:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.839 22:56:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.100 22:56:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:22.100 22:56:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:22.361 true 00:14:22.361 22:56:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:22.361 22:56:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.361 22:56:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:22.623 22:56:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:22.623 22:56:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:22.623 true 00:14:22.884 22:56:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:22.884 22:56:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.884 22:56:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.144 22:56:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:23.144 22:56:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:23.144 true 00:14:23.144 22:56:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:23.144 22:56:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.405 22:56:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.667 22:56:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:23.667 22:56:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:23.667 true 00:14:23.667 22:56:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:23.667 22:56:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.929 22:56:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:14:24.189 22:56:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:24.189 22:56:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:24.189 true 00:14:24.189 22:56:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:24.189 22:56:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.449 22:56:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.449 22:56:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:24.449 22:56:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:24.710 true 00:14:24.710 22:56:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:24.710 22:56:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.972 22:56:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.972 22:56:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:24.972 22:56:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:25.234 true 00:14:25.234 22:56:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:25.234 22:56:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.495 22:56:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.495 22:56:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:25.495 22:56:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:25.757 true 00:14:25.757 22:56:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:25.757 22:56:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.018 22:56:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.018 22:56:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:26.018 22:56:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:26.280 true 00:14:26.280 22:56:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790 00:14:26.280 22:56:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.280 22:56:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:26.542 22:56:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:26.542 22:56:54 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061
00:14:26.803 true
00:14:26.803 22:56:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790
00:14:26.803 22:56:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:26.803 22:56:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:26.803 Initializing NVMe Controllers
00:14:26.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:26.803 Controller IO queue size 128, less than required.
00:14:26.803 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:26.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:26.803 Initialization complete. Launching workers.
00:14:26.803 ========================================================
00:14:26.803                                                                         Latency(us)
00:14:26.803 Device Information                                                   :     IOPS    MiB/s   Average      min      max
00:14:26.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 23870.57    11.66   5362.29  3327.55 10544.57
00:14:26.803 ========================================================
00:14:26.803 Total                                                                : 23870.57    11.66   5362.29  3327.55 10544.57
00:14:26.803
00:14:27.064 22:56:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1062
00:14:27.064 22:56:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062
00:14:27.064 true
00:14:27.064 22:56:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 4011790
00:14:27.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4011790) - No such process
00:14:27.064 22:56:55 -- target/ns_hotplug_stress.sh@53 -- # wait 4011790
00:14:27.064 22:56:55 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:27.327 22:56:55 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:14:27.588 null0
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:27.588 22:56:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:27.850 null1
00:14:27.850 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:27.850 22:56:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:27.850 22:56:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:27.850 null2
00:14:27.850 22:56:56 -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:27.850 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:27.850 22:56:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:28.112 null3 00:14:28.112 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.112 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.112 22:56:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:28.373 null4 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:28.373 null5 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.373 22:56:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:28.635 null6 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:28.635 null7 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
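The hot-plug phase traced earlier (ns_hotplug_stress.sh lines @44-@50) keeps one namespace churning while the I/O generator started before it (pid 4011790 in this run) is still alive: each pass removes namespace 1, re-adds the Delay0 bdev, and grows the NULL1 bdev by one unit, until the kill -0 liveness probe fails. A minimal bash sketch reconstructed from that trace; rpc_py and perf_pid are assumed names for the rpc.py path and the generator pid that the log only shows expanded.

    # Reconstructed sketch, not the verbatim script: one namespace-churn pass per
    # iteration while the I/O generator is alive. rpc_py and perf_pid are assumptions.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=4011790            # pid of the I/O workload, as printed in the log above
    null_size=1000
    while kill -0 "$perf_pid"; do
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # trace line @45
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # trace line @46
        null_size=$((null_size + 1))                                         # trace line @49
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                        # trace line @50
    done
    wait "$perf_pid"            # trace line @53: reap the generator once kill -0 fails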
00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
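The @63/@14 pairs above come from background workers; each one runs an add_remove helper that repeatedly attaches and detaches a fixed namespace ID on nqn.2016-06.io.spdk:cnode1. A sketch of that helper as it can be reconstructed from the @14-@18 trace lines; anything beyond the nsid/bdev arguments and the ten-iteration bound visible in the trace is an assumption.

    # add_remove <nsid> <bdev>: ten add/remove rounds for one namespace ID,
    # mirroring the @16 loop counter and the @17/@18 RPC calls in the trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }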
00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@66 -- # wait 4018210 4018213 4018216 4018219 4018222 4018224 4018227 4018229 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:28.635 22:56:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.636 22:56:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:28.896 22:56:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:28.897 22:56:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:28.897 22:56:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:28.897 22:56:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:28.897 22:56:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:28.897 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:28.897 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:28.897 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
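The "wait 4018210 4018213 ... 4018229" entry above reaps the eight background workers. The fan-out reconstructed from the @58-@66 trace lines looks roughly like the sketch below; exact quoting and loop style are assumptions, and it reuses the add_remove sketch shown earlier.

    # Reconstructed fan-out: create eight null bdevs (@59/@60), start one
    # add_remove worker per bdev in the background (@62-@64), then wait for all
    # of them (@66).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096    # same size/block-size arguments as in the trace
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                   # the eight pids 4018210 ... 4018229 in the log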
00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.158 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.420 22:56:57 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.420 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.682 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:29.944 22:56:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:29.944 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.205 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.466 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
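While the eight workers interleave add and remove calls on namespaces 1 through 8, the subsystem's current namespace set can be inspected at any point. This is an illustration only, not something ns_hotplug_stress.sh does, using SPDK's standard subsystem-listing RPC:

    # Illustration only (not in the test script): dump the subsystem and its
    # currently attached namespaces while the workers churn.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems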
00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:30.728 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.992 22:56:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:14:30.992 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.316 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.317 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:31.578 22:56:59 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.578 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 
22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:31.839 22:56:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.100 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.361 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:32.361 22:57:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:32.361 22:57:00 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.361 22:57:00 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:32.361 22:57:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:32.361 22:57:00 -- nvmf/common.sh@116 -- # sync 00:14:32.361 22:57:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:32.361 22:57:00 -- nvmf/common.sh@119 -- # set +e 00:14:32.361 22:57:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:32.361 22:57:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.361 rmmod nvme_tcp 00:14:32.361 rmmod nvme_fabrics 00:14:32.361 rmmod nvme_keyring 00:14:32.361 22:57:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.361 22:57:00 -- nvmf/common.sh@123 -- # set -e 00:14:32.361 22:57:00 -- nvmf/common.sh@124 -- # return 0 00:14:32.361 22:57:00 -- nvmf/common.sh@477 -- # '[' -n 4011181 ']' 00:14:32.361 22:57:00 -- nvmf/common.sh@478 -- # killprocess 4011181 00:14:32.361 22:57:00 -- common/autotest_common.sh@926 -- # '[' -z 4011181 ']' 00:14:32.361 22:57:00 -- common/autotest_common.sh@930 -- # kill -0 4011181 00:14:32.361 22:57:00 -- common/autotest_common.sh@931 -- # uname 00:14:32.361 22:57:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.361 22:57:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4011181 00:14:32.361 22:57:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.361 22:57:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.361 22:57:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4011181' 00:14:32.361 killing process with pid 4011181 00:14:32.361 22:57:00 -- common/autotest_common.sh@945 -- # kill 4011181 00:14:32.361 22:57:00 -- common/autotest_common.sh@950 -- # wait 4011181 00:14:32.622 22:57:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.622 22:57:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.622 22:57:00 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini
00:14:32.622 22:57:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:32.622 22:57:00 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:32.622 22:57:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:32.622 22:57:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:32.622 22:57:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:34.538 22:57:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:14:34.538
00:14:34.538 real 0m47.591s
00:14:34.538 user 3m13.004s
00:14:34.538 sys 0m16.616s
00:14:34.538 22:57:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:34.538 22:57:02 -- common/autotest_common.sh@10 -- # set +x
00:14:34.538 ************************************
00:14:34.538 END TEST nvmf_ns_hotplug_stress
00:14:34.538 ************************************
00:14:34.538 22:57:02 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:34.538 22:57:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:34.538 22:57:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:34.538 22:57:02 -- common/autotest_common.sh@10 -- # set +x
00:14:34.538 ************************************
00:14:34.538 START TEST nvmf_connect_stress
00:14:34.538 ************************************
00:14:34.538 22:57:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:34.800 * Looking for test storage...
00:14:34.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:34.800 22:57:02 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:34.800 22:57:02 -- nvmf/common.sh@7 -- # uname -s
00:14:34.800 22:57:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:34.800 22:57:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:34.800 22:57:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:34.800 22:57:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:34.800 22:57:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:34.800 22:57:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:34.800 22:57:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:34.800 22:57:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:34.800 22:57:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:34.800 22:57:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:34.800 22:57:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:34.800 22:57:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:34.800 22:57:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:34.800 22:57:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:34.800 22:57:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:34.800 22:57:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:34.800 22:57:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:34.800 22:57:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:34.800 22:57:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:34.800 22:57:02 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.800 22:57:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.800 22:57:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.800 22:57:02 -- paths/export.sh@5 -- # export PATH 00:14:34.800 22:57:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.800 22:57:02 -- nvmf/common.sh@46 -- # : 0 00:14:34.800 22:57:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.800 22:57:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.800 22:57:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.800 22:57:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.800 22:57:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.800 22:57:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.800 22:57:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.800 22:57:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.800 22:57:02 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:34.800 22:57:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.800 22:57:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.800 22:57:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.800 22:57:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.800 22:57:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.800 22:57:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.800 22:57:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.800 22:57:02 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.800 22:57:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:34.800 22:57:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:34.800 22:57:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:34.800 22:57:02 -- common/autotest_common.sh@10 -- # set +x 00:14:41.393 22:57:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:41.393 22:57:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:41.393 22:57:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:41.394 22:57:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:41.394 22:57:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:41.394 22:57:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:41.394 22:57:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:41.394 22:57:09 -- nvmf/common.sh@294 -- # net_devs=() 00:14:41.394 22:57:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:41.394 22:57:09 -- nvmf/common.sh@295 -- # e810=() 00:14:41.394 22:57:09 -- nvmf/common.sh@295 -- # local -ga e810 00:14:41.394 22:57:09 -- nvmf/common.sh@296 -- # x722=() 00:14:41.394 22:57:09 -- nvmf/common.sh@296 -- # local -ga x722 00:14:41.394 22:57:09 -- nvmf/common.sh@297 -- # mlx=() 00:14:41.394 22:57:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:41.394 22:57:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.394 22:57:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:41.394 22:57:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:41.394 22:57:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:41.394 22:57:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:41.394 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:41.394 22:57:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:41.394 22:57:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:41.394 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:41.394 
22:57:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:41.394 22:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.394 22:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.394 22:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:41.394 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:41.394 22:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.394 22:57:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:41.394 22:57:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.394 22:57:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.394 22:57:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:41.394 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:41.394 22:57:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.394 22:57:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:41.394 22:57:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:41.394 22:57:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:41.394 22:57:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.394 22:57:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.394 22:57:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.394 22:57:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:41.394 22:57:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.394 22:57:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.394 22:57:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:41.394 22:57:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.394 22:57:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.394 22:57:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:41.394 22:57:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:41.394 22:57:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.394 22:57:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.655 22:57:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.655 22:57:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.655 22:57:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:41.655 22:57:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.655 22:57:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.655 22:57:09 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.917 22:57:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:41.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:14:41.917 00:14:41.917 --- 10.0.0.2 ping statistics --- 00:14:41.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.917 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:14:41.917 22:57:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:41.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:14:41.917 00:14:41.917 --- 10.0.0.1 ping statistics --- 00:14:41.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.917 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:14:41.917 22:57:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.917 22:57:09 -- nvmf/common.sh@410 -- # return 0 00:14:41.917 22:57:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:41.917 22:57:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.917 22:57:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:41.917 22:57:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:41.917 22:57:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.917 22:57:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:41.917 22:57:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:41.917 22:57:09 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:41.917 22:57:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:41.917 22:57:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:41.917 22:57:09 -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 22:57:09 -- nvmf/common.sh@469 -- # nvmfpid=4023689 00:14:41.917 22:57:09 -- nvmf/common.sh@470 -- # waitforlisten 4023689 00:14:41.917 22:57:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:41.917 22:57:09 -- common/autotest_common.sh@819 -- # '[' -z 4023689 ']' 00:14:41.917 22:57:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.917 22:57:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.917 22:57:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.917 22:57:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.917 22:57:09 -- common/autotest_common.sh@10 -- # set +x 00:14:41.917 [2024-06-09 22:57:09.957169] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:14:41.917 [2024-06-09 22:57:09.957234] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.917 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.917 [2024-06-09 22:57:10.027780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:42.178 [2024-06-09 22:57:10.102692] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:42.178 [2024-06-09 22:57:10.102817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:42.178 [2024-06-09 22:57:10.102826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:42.178 [2024-06-09 22:57:10.102835] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:42.178 [2024-06-09 22:57:10.102941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.178 [2024-06-09 22:57:10.103098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.178 [2024-06-09 22:57:10.103099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:42.751 22:57:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.751 22:57:10 -- common/autotest_common.sh@852 -- # return 0 00:14:42.751 22:57:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:42.751 22:57:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:42.751 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:42.751 22:57:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.751 22:57:10 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:42.751 22:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.751 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:42.751 [2024-06-09 22:57:10.775489] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.751 22:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.751 22:57:10 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:42.751 22:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.751 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:42.751 22:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.751 22:57:10 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.751 22:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.751 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:42.751 [2024-06-09 22:57:10.799865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.751 22:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.751 22:57:10 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:42.751 22:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.751 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:42.751 NULL1 00:14:42.751 22:57:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.751 22:57:10 -- target/connect_stress.sh@21 -- # PERF_PID=4023937 00:14:42.752 22:57:10 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.752 22:57:10 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:42.752 22:57:10 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:42.752 22:57:10 -- target/connect_stress.sh@28 -- # cat 00:14:42.752 22:57:10 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:42.752 22:57:10 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:42.752 22:57:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.752 22:57:10 -- common/autotest_common.sh@10 -- # set +x 00:14:43.325 22:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.325 22:57:11 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:43.325 22:57:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.325 22:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.325 22:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:43.587 22:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.587 22:57:11 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:43.587 22:57:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.587 22:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.587 22:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:43.847 22:57:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.847 22:57:11 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:43.847 22:57:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.847 22:57:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.847 22:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:44.106 22:57:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.106 22:57:12 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:44.106 22:57:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.106 22:57:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.106 22:57:12 -- common/autotest_common.sh@10 -- # set +x 00:14:44.366 22:57:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.626 22:57:12 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:44.626 22:57:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.626 22:57:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.626 22:57:12 -- common/autotest_common.sh@10 -- # set +x 00:14:44.886 22:57:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.886 22:57:12 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:44.886 22:57:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.886 22:57:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.886 22:57:12 -- common/autotest_common.sh@10 -- # set +x 00:14:45.147 22:57:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.147 22:57:13 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:45.147 22:57:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.147 22:57:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.147 22:57:13 -- common/autotest_common.sh@10 -- # set +x 00:14:45.407 22:57:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.407 22:57:13 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:45.407 22:57:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.407 22:57:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.407 22:57:13 -- common/autotest_common.sh@10 -- # set +x 00:14:45.667 22:57:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.667 22:57:13 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:45.667 22:57:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.928 22:57:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.928 22:57:13 -- common/autotest_common.sh@10 -- # set +x 00:14:46.189 22:57:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.189 22:57:14 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:46.189 22:57:14 -- target/connect_stress.sh@35 -- # rpc_cmd 
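The block above is the target-side setup for the connect_stress run: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a NULL1 null bdev, after which rpc.txt is populated in a 1..20 loop and the connect_stress initiator is started against that listener. Condensed into direct rpc.py calls, the same setup looks roughly like the sketch below (illustrative only; the harness actually drives these through its rpc_cmd wrapper against the nvmf_tgt running inside the cvl_0_0_ns_spdk namespace):

    # Sketch of the target-side objects created above, as direct scripts/rpc.py calls.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks

The initiator is the connect_stress binary started just above with -t 10 (a roughly ten-second run, judging by the timestamps). The repeated 'kill -0 4023937' / 'rpc_cmd' pairs that follow are the script polling that process and re-issuing the queued RPCs until it exits, which is why the same two trace lines recur about once a second below.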
00:14:46.189 22:57:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.189 22:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.451 22:57:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.451 22:57:14 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:46.451 22:57:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.451 22:57:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.451 22:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.712 22:57:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.712 22:57:14 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:46.712 22:57:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.712 22:57:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.712 22:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:46.974 22:57:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.974 22:57:15 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:46.974 22:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.974 22:57:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.974 22:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:47.546 22:57:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.546 22:57:15 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:47.546 22:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.546 22:57:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.546 22:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:47.807 22:57:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.807 22:57:15 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:47.807 22:57:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.807 22:57:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.807 22:57:15 -- common/autotest_common.sh@10 -- # set +x 00:14:48.069 22:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.069 22:57:16 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:48.069 22:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.069 22:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.069 22:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:48.331 22:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.331 22:57:16 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:48.331 22:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.331 22:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.331 22:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:48.592 22:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:48.592 22:57:16 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:48.853 22:57:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.853 22:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:48.853 22:57:16 -- common/autotest_common.sh@10 -- # set +x 00:14:49.114 22:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.114 22:57:17 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:49.114 22:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.114 22:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.114 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.413 22:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.413 22:57:17 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:49.413 22:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.413 
22:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.413 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.688 22:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.688 22:57:17 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:49.688 22:57:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.688 22:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.688 22:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:49.949 22:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.949 22:57:18 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:49.949 22:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.949 22:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.949 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:50.521 22:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.521 22:57:18 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:50.521 22:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.521 22:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.521 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:50.782 22:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.782 22:57:18 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:50.782 22:57:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.782 22:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.782 22:57:18 -- common/autotest_common.sh@10 -- # set +x 00:14:51.043 22:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.043 22:57:19 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:51.043 22:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.043 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.043 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:51.304 22:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.304 22:57:19 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:51.304 22:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.304 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.304 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:51.565 22:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.565 22:57:19 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:51.565 22:57:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.565 22:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.565 22:57:19 -- common/autotest_common.sh@10 -- # set +x 00:14:52.137 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.137 22:57:20 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:52.137 22:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.137 22:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.137 22:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:52.397 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.397 22:57:20 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:52.397 22:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.397 22:57:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.397 22:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:52.658 22:57:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.658 22:57:20 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:52.658 22:57:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.658 22:57:20 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.658 22:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:52.919 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:52.919 22:57:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.919 22:57:21 -- target/connect_stress.sh@34 -- # kill -0 4023937 00:14:52.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4023937) - No such process 00:14:52.919 22:57:21 -- target/connect_stress.sh@38 -- # wait 4023937 00:14:52.919 22:57:21 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:52.919 22:57:21 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:52.919 22:57:21 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:52.919 22:57:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.919 22:57:21 -- nvmf/common.sh@116 -- # sync 00:14:52.919 22:57:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.919 22:57:21 -- nvmf/common.sh@119 -- # set +e 00:14:52.919 22:57:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.919 22:57:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.919 rmmod nvme_tcp 00:14:52.919 rmmod nvme_fabrics 00:14:52.919 rmmod nvme_keyring 00:14:52.919 22:57:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.919 22:57:21 -- nvmf/common.sh@123 -- # set -e 00:14:52.919 22:57:21 -- nvmf/common.sh@124 -- # return 0 00:14:52.919 22:57:21 -- nvmf/common.sh@477 -- # '[' -n 4023689 ']' 00:14:52.919 22:57:21 -- nvmf/common.sh@478 -- # killprocess 4023689 00:14:52.919 22:57:21 -- common/autotest_common.sh@926 -- # '[' -z 4023689 ']' 00:14:52.919 22:57:21 -- common/autotest_common.sh@930 -- # kill -0 4023689 00:14:52.919 22:57:21 -- common/autotest_common.sh@931 -- # uname 00:14:52.919 22:57:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.919 22:57:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4023689 00:14:53.181 22:57:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:53.181 22:57:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:53.181 22:57:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4023689' 00:14:53.181 killing process with pid 4023689 00:14:53.181 22:57:21 -- common/autotest_common.sh@945 -- # kill 4023689 00:14:53.181 22:57:21 -- common/autotest_common.sh@950 -- # wait 4023689 00:14:53.181 22:57:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.181 22:57:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.181 22:57:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.181 22:57:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.181 22:57:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.181 22:57:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.181 22:57:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.181 22:57:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.733 22:57:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:55.733 00:14:55.733 real 0m20.668s 00:14:55.733 user 0m41.953s 00:14:55.733 sys 0m8.569s 00:14:55.733 22:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.733 22:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:55.733 ************************************ 00:14:55.733 END TEST nvmf_connect_stress 00:14:55.733 
************************************ 00:14:55.733 22:57:23 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:55.733 22:57:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.733 22:57:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.733 22:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:55.733 ************************************ 00:14:55.733 START TEST nvmf_fused_ordering 00:14:55.733 ************************************ 00:14:55.733 22:57:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:55.733 * Looking for test storage... 00:14:55.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.733 22:57:23 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.733 22:57:23 -- nvmf/common.sh@7 -- # uname -s 00:14:55.733 22:57:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.733 22:57:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.733 22:57:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.733 22:57:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.733 22:57:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.733 22:57:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.733 22:57:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.733 22:57:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.733 22:57:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.733 22:57:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.733 22:57:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.733 22:57:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:55.733 22:57:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.733 22:57:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.733 22:57:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.733 22:57:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.733 22:57:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.733 22:57:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.733 22:57:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.733 22:57:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.733 22:57:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.733 22:57:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.733 22:57:23 -- paths/export.sh@5 -- # export PATH 00:14:55.733 22:57:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.733 22:57:23 -- nvmf/common.sh@46 -- # : 0 00:14:55.733 22:57:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.733 22:57:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.733 22:57:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.733 22:57:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.733 22:57:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.733 22:57:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.733 22:57:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.733 22:57:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.733 22:57:23 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:55.733 22:57:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.733 22:57:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.733 22:57:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.733 22:57:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.733 22:57:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.733 22:57:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.733 22:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.733 22:57:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.733 22:57:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.733 22:57:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.733 22:57:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.733 22:57:23 -- common/autotest_common.sh@10 -- # set +x 00:15:02.330 22:57:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:02.330 22:57:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:02.330 22:57:30 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:15:02.330 22:57:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:02.330 22:57:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:02.330 22:57:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:02.330 22:57:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:02.330 22:57:30 -- nvmf/common.sh@294 -- # net_devs=() 00:15:02.330 22:57:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:02.330 22:57:30 -- nvmf/common.sh@295 -- # e810=() 00:15:02.330 22:57:30 -- nvmf/common.sh@295 -- # local -ga e810 00:15:02.330 22:57:30 -- nvmf/common.sh@296 -- # x722=() 00:15:02.330 22:57:30 -- nvmf/common.sh@296 -- # local -ga x722 00:15:02.330 22:57:30 -- nvmf/common.sh@297 -- # mlx=() 00:15:02.330 22:57:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:02.330 22:57:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.330 22:57:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:02.330 22:57:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:02.330 22:57:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.330 22:57:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:02.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:02.330 22:57:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:02.330 22:57:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:02.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:02.330 22:57:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
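gather_supported_nvmf_pci_devs, traced above for the second time in this log, matches PCI IDs against a small table (0x8086:0x1592 and 0x8086:0x159b for E810, 0x8086:0x37d2 for X722, several 0x15b3 entries for Mellanox) and, via the [[ e810 == e810 ]] branch, keeps only the E810 devices, here 0000:4b:00.0 and 0000:4b:00.1. A manual equivalent of that discovery, outside the harness (a sketch; the netdev names are those reported in the trace):

    # List the two E810 ports the script just matched by vendor:device ID.
    lspci -nn -d 8086:159b
    # The bound netdev is found the same way the script does it, via sysfs.
    ls /sys/bus/pci/devices/0000:4b:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:4b:00.1/net    # -> cvl_0_1

The 'Found net devices under ...' lines that follow are exactly that sysfs lookup.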
00:15:02.330 22:57:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.330 22:57:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.330 22:57:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.330 22:57:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:02.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:02.330 22:57:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.330 22:57:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:02.330 22:57:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.330 22:57:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.330 22:57:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:02.330 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:02.330 22:57:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.330 22:57:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:02.330 22:57:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:02.330 22:57:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:02.330 22:57:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.330 22:57:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.330 22:57:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.330 22:57:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:02.330 22:57:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.330 22:57:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.330 22:57:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:02.330 22:57:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.330 22:57:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.330 22:57:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:02.330 22:57:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:02.330 22:57:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.330 22:57:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.330 22:57:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.330 22:57:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.330 22:57:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:02.330 22:57:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.330 22:57:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.330 22:57:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.591 22:57:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:02.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:02.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:15:02.591 00:15:02.592 --- 10.0.0.2 ping statistics --- 00:15:02.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.592 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:15:02.592 22:57:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:15:02.592 00:15:02.592 --- 10.0.0.1 ping statistics --- 00:15:02.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.592 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:15:02.592 22:57:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.592 22:57:30 -- nvmf/common.sh@410 -- # return 0 00:15:02.592 22:57:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:02.592 22:57:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.592 22:57:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:02.592 22:57:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:02.592 22:57:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.592 22:57:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:02.592 22:57:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:02.592 22:57:30 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:02.592 22:57:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:02.592 22:57:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:02.592 22:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:02.592 22:57:30 -- nvmf/common.sh@469 -- # nvmfpid=4030333 00:15:02.592 22:57:30 -- nvmf/common.sh@470 -- # waitforlisten 4030333 00:15:02.592 22:57:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.592 22:57:30 -- common/autotest_common.sh@819 -- # '[' -z 4030333 ']' 00:15:02.592 22:57:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.592 22:57:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:02.592 22:57:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.592 22:57:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:02.592 22:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:02.592 [2024-06-09 22:57:30.638016] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:02.592 [2024-06-09 22:57:30.638084] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.592 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.592 [2024-06-09 22:57:30.709665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.853 [2024-06-09 22:57:30.780347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:02.853 [2024-06-09 22:57:30.780474] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
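nvmf_tcp_init, whose commands and ping checks are traced just above, builds the same two-namespace topology for the fused_ordering test that the connect_stress test used: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed from the trace (a sketch of the same commands, not the harness script itself):

    ip -4 addr flush cvl_0_0 ; ip -4 addr flush cvl_0_1                   # start from clean addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                    # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> initiator

The nvmf_tgt launched via 'ip netns exec cvl_0_0_ns_spdk' (its startup notices surround this point) therefore listens on 10.0.0.2:4420 inside the namespace, and the fused_ordering binary later connects to it from the root namespace.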
00:15:02.853 [2024-06-09 22:57:30.780483] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.853 [2024-06-09 22:57:30.780490] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.853 [2024-06-09 22:57:30.780508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.425 22:57:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.425 22:57:31 -- common/autotest_common.sh@852 -- # return 0 00:15:03.425 22:57:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:03.425 22:57:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:03.425 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.425 22:57:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.426 22:57:31 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 [2024-06-09 22:57:31.438899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 [2024-06-09 22:57:31.463062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 NULL1 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:03.426 22:57:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.426 22:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:03.426 22:57:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.426 22:57:31 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:03.426 [2024-06-09 22:57:31.526132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:03.426 [2024-06-09 22:57:31.526194] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030421 ] 00:15:03.426 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.369 Attached to nqn.2016-06.io.spdk:cnode1 00:15:04.370 Namespace ID: 1 size: 1GB 00:15:04.370 fused_ordering(0) 00:15:04.370 fused_ordering(1) 00:15:04.370 fused_ordering(2) 00:15:04.370 fused_ordering(3) 00:15:04.370 fused_ordering(4) 00:15:04.370 fused_ordering(5) 00:15:04.370 fused_ordering(6) 00:15:04.370 fused_ordering(7) 00:15:04.370 fused_ordering(8) 00:15:04.370 fused_ordering(9) 00:15:04.370 fused_ordering(10) 00:15:04.370 fused_ordering(11) 00:15:04.370 fused_ordering(12) 00:15:04.370 fused_ordering(13) 00:15:04.370 fused_ordering(14) 00:15:04.370 fused_ordering(15) 00:15:04.370 fused_ordering(16) 00:15:04.370 fused_ordering(17) 00:15:04.370 fused_ordering(18) 00:15:04.370 fused_ordering(19) 00:15:04.370 fused_ordering(20) 00:15:04.370 fused_ordering(21) 00:15:04.370 fused_ordering(22) 00:15:04.370 fused_ordering(23) 00:15:04.370 fused_ordering(24) 00:15:04.370 fused_ordering(25) 00:15:04.370 fused_ordering(26) 00:15:04.370 fused_ordering(27) 00:15:04.370 fused_ordering(28) 00:15:04.370 fused_ordering(29) 00:15:04.370 fused_ordering(30) 00:15:04.370 fused_ordering(31) 00:15:04.370 fused_ordering(32) 00:15:04.370 fused_ordering(33) 00:15:04.370 fused_ordering(34) 00:15:04.370 fused_ordering(35) 00:15:04.370 fused_ordering(36) 00:15:04.370 fused_ordering(37) 00:15:04.370 fused_ordering(38) 00:15:04.370 fused_ordering(39) 00:15:04.370 fused_ordering(40) 00:15:04.370 fused_ordering(41) 00:15:04.370 fused_ordering(42) 00:15:04.370 fused_ordering(43) 00:15:04.370 fused_ordering(44) 00:15:04.370 fused_ordering(45) 00:15:04.370 fused_ordering(46) 00:15:04.370 fused_ordering(47) 00:15:04.370 fused_ordering(48) 00:15:04.370 fused_ordering(49) 00:15:04.370 fused_ordering(50) 00:15:04.370 fused_ordering(51) 00:15:04.370 fused_ordering(52) 00:15:04.370 fused_ordering(53) 00:15:04.370 fused_ordering(54) 00:15:04.370 fused_ordering(55) 00:15:04.370 fused_ordering(56) 00:15:04.370 fused_ordering(57) 00:15:04.370 fused_ordering(58) 00:15:04.370 fused_ordering(59) 00:15:04.370 fused_ordering(60) 00:15:04.370 fused_ordering(61) 00:15:04.370 fused_ordering(62) 00:15:04.370 fused_ordering(63) 00:15:04.370 fused_ordering(64) 00:15:04.370 fused_ordering(65) 00:15:04.370 fused_ordering(66) 00:15:04.370 fused_ordering(67) 00:15:04.370 fused_ordering(68) 00:15:04.370 fused_ordering(69) 00:15:04.370 fused_ordering(70) 00:15:04.370 fused_ordering(71) 00:15:04.370 fused_ordering(72) 00:15:04.370 fused_ordering(73) 00:15:04.370 fused_ordering(74) 00:15:04.370 fused_ordering(75) 00:15:04.370 fused_ordering(76) 00:15:04.370 fused_ordering(77) 00:15:04.370 fused_ordering(78) 00:15:04.370 fused_ordering(79) 00:15:04.370 fused_ordering(80) 00:15:04.370 fused_ordering(81) 00:15:04.370 fused_ordering(82) 00:15:04.370 fused_ordering(83) 00:15:04.370 fused_ordering(84) 00:15:04.370 fused_ordering(85) 00:15:04.370 fused_ordering(86) 00:15:04.370 fused_ordering(87) 00:15:04.370 fused_ordering(88) 00:15:04.370 fused_ordering(89) 00:15:04.370 fused_ordering(90) 00:15:04.370 fused_ordering(91) 00:15:04.370 fused_ordering(92) 00:15:04.370 fused_ordering(93) 00:15:04.370 fused_ordering(94) 00:15:04.370 fused_ordering(95) 00:15:04.370 fused_ordering(96) 00:15:04.370 
fused_ordering(97) 00:15:04.370 fused_ordering(98) 00:15:04.370 fused_ordering(99) 00:15:04.370 fused_ordering(100) 00:15:04.370 fused_ordering(101) 00:15:04.370 fused_ordering(102) 00:15:04.370 fused_ordering(103) 00:15:04.370 fused_ordering(104) 00:15:04.370 fused_ordering(105) 00:15:04.370 fused_ordering(106) 00:15:04.370 fused_ordering(107) 00:15:04.370 fused_ordering(108) 00:15:04.370 fused_ordering(109) 00:15:04.370 fused_ordering(110) 00:15:04.370 fused_ordering(111) 00:15:04.370 fused_ordering(112) 00:15:04.370 fused_ordering(113) 00:15:04.370 fused_ordering(114) 00:15:04.370 fused_ordering(115) 00:15:04.370 fused_ordering(116) 00:15:04.370 fused_ordering(117) 00:15:04.370 fused_ordering(118) 00:15:04.370 fused_ordering(119) 00:15:04.370 fused_ordering(120) 00:15:04.370 fused_ordering(121) 00:15:04.370 fused_ordering(122) 00:15:04.370 fused_ordering(123) 00:15:04.370 fused_ordering(124) 00:15:04.370 fused_ordering(125) 00:15:04.370 fused_ordering(126) 00:15:04.370 fused_ordering(127) 00:15:04.370 fused_ordering(128) 00:15:04.370 fused_ordering(129) 00:15:04.370 fused_ordering(130) 00:15:04.370 fused_ordering(131) 00:15:04.370 fused_ordering(132) 00:15:04.370 fused_ordering(133) 00:15:04.370 fused_ordering(134) 00:15:04.370 fused_ordering(135) 00:15:04.370 fused_ordering(136) 00:15:04.370 fused_ordering(137) 00:15:04.370 fused_ordering(138) 00:15:04.370 fused_ordering(139) 00:15:04.370 fused_ordering(140) 00:15:04.370 fused_ordering(141) 00:15:04.370 fused_ordering(142) 00:15:04.370 fused_ordering(143) 00:15:04.370 fused_ordering(144) 00:15:04.370 fused_ordering(145) 00:15:04.370 fused_ordering(146) 00:15:04.370 fused_ordering(147) 00:15:04.370 fused_ordering(148) 00:15:04.370 fused_ordering(149) 00:15:04.370 fused_ordering(150) 00:15:04.370 fused_ordering(151) 00:15:04.370 fused_ordering(152) 00:15:04.370 fused_ordering(153) 00:15:04.370 fused_ordering(154) 00:15:04.370 fused_ordering(155) 00:15:04.370 fused_ordering(156) 00:15:04.370 fused_ordering(157) 00:15:04.370 fused_ordering(158) 00:15:04.370 fused_ordering(159) 00:15:04.370 fused_ordering(160) 00:15:04.370 fused_ordering(161) 00:15:04.370 fused_ordering(162) 00:15:04.370 fused_ordering(163) 00:15:04.370 fused_ordering(164) 00:15:04.370 fused_ordering(165) 00:15:04.370 fused_ordering(166) 00:15:04.370 fused_ordering(167) 00:15:04.370 fused_ordering(168) 00:15:04.370 fused_ordering(169) 00:15:04.370 fused_ordering(170) 00:15:04.370 fused_ordering(171) 00:15:04.370 fused_ordering(172) 00:15:04.370 fused_ordering(173) 00:15:04.370 fused_ordering(174) 00:15:04.370 fused_ordering(175) 00:15:04.370 fused_ordering(176) 00:15:04.370 fused_ordering(177) 00:15:04.370 fused_ordering(178) 00:15:04.370 fused_ordering(179) 00:15:04.370 fused_ordering(180) 00:15:04.370 fused_ordering(181) 00:15:04.370 fused_ordering(182) 00:15:04.370 fused_ordering(183) 00:15:04.370 fused_ordering(184) 00:15:04.370 fused_ordering(185) 00:15:04.370 fused_ordering(186) 00:15:04.370 fused_ordering(187) 00:15:04.370 fused_ordering(188) 00:15:04.370 fused_ordering(189) 00:15:04.370 fused_ordering(190) 00:15:04.370 fused_ordering(191) 00:15:04.370 fused_ordering(192) 00:15:04.370 fused_ordering(193) 00:15:04.370 fused_ordering(194) 00:15:04.370 fused_ordering(195) 00:15:04.370 fused_ordering(196) 00:15:04.370 fused_ordering(197) 00:15:04.370 fused_ordering(198) 00:15:04.370 fused_ordering(199) 00:15:04.370 fused_ordering(200) 00:15:04.370 fused_ordering(201) 00:15:04.370 fused_ordering(202) 00:15:04.370 fused_ordering(203) 00:15:04.370 fused_ordering(204) 
00:15:04.370 fused_ordering(205) 00:15:04.943 fused_ordering(206) 00:15:04.943 fused_ordering(207) 00:15:04.943 fused_ordering(208) 00:15:04.943 fused_ordering(209) 00:15:04.943 fused_ordering(210) 00:15:04.943 fused_ordering(211) 00:15:04.943 fused_ordering(212) 00:15:04.943 fused_ordering(213) 00:15:04.944 fused_ordering(214) 00:15:04.944 fused_ordering(215) 00:15:04.944 fused_ordering(216) 00:15:04.944 fused_ordering(217) 00:15:04.944 fused_ordering(218) 00:15:04.944 fused_ordering(219) 00:15:04.944 fused_ordering(220) 00:15:04.944 fused_ordering(221) 00:15:04.944 fused_ordering(222) 00:15:04.944 fused_ordering(223) 00:15:04.944 fused_ordering(224) 00:15:04.944 fused_ordering(225) 00:15:04.944 fused_ordering(226) 00:15:04.944 fused_ordering(227) 00:15:04.944 fused_ordering(228) 00:15:04.944 fused_ordering(229) 00:15:04.944 fused_ordering(230) 00:15:04.944 fused_ordering(231) 00:15:04.944 fused_ordering(232) 00:15:04.944 fused_ordering(233) 00:15:04.944 fused_ordering(234) 00:15:04.944 fused_ordering(235) 00:15:04.944 fused_ordering(236) 00:15:04.944 fused_ordering(237) 00:15:04.944 fused_ordering(238) 00:15:04.944 fused_ordering(239) 00:15:04.944 fused_ordering(240) 00:15:04.944 fused_ordering(241) 00:15:04.944 fused_ordering(242) 00:15:04.944 fused_ordering(243) 00:15:04.944 fused_ordering(244) 00:15:04.944 fused_ordering(245) 00:15:04.944 fused_ordering(246) 00:15:04.944 fused_ordering(247) 00:15:04.944 fused_ordering(248) 00:15:04.944 fused_ordering(249) 00:15:04.944 fused_ordering(250) 00:15:04.944 fused_ordering(251) 00:15:04.944 fused_ordering(252) 00:15:04.944 fused_ordering(253) 00:15:04.944 fused_ordering(254) 00:15:04.944 fused_ordering(255) 00:15:04.944 fused_ordering(256) 00:15:04.944 fused_ordering(257) 00:15:04.944 fused_ordering(258) 00:15:04.944 fused_ordering(259) 00:15:04.944 fused_ordering(260) 00:15:04.944 fused_ordering(261) 00:15:04.944 fused_ordering(262) 00:15:04.944 fused_ordering(263) 00:15:04.944 fused_ordering(264) 00:15:04.944 fused_ordering(265) 00:15:04.944 fused_ordering(266) 00:15:04.944 fused_ordering(267) 00:15:04.944 fused_ordering(268) 00:15:04.944 fused_ordering(269) 00:15:04.944 fused_ordering(270) 00:15:04.944 fused_ordering(271) 00:15:04.944 fused_ordering(272) 00:15:04.944 fused_ordering(273) 00:15:04.944 fused_ordering(274) 00:15:04.944 fused_ordering(275) 00:15:04.944 fused_ordering(276) 00:15:04.944 fused_ordering(277) 00:15:04.944 fused_ordering(278) 00:15:04.944 fused_ordering(279) 00:15:04.944 fused_ordering(280) 00:15:04.944 fused_ordering(281) 00:15:04.944 fused_ordering(282) 00:15:04.944 fused_ordering(283) 00:15:04.944 fused_ordering(284) 00:15:04.944 fused_ordering(285) 00:15:04.944 fused_ordering(286) 00:15:04.944 fused_ordering(287) 00:15:04.944 fused_ordering(288) 00:15:04.944 fused_ordering(289) 00:15:04.944 fused_ordering(290) 00:15:04.944 fused_ordering(291) 00:15:04.944 fused_ordering(292) 00:15:04.944 fused_ordering(293) 00:15:04.944 fused_ordering(294) 00:15:04.944 fused_ordering(295) 00:15:04.944 fused_ordering(296) 00:15:04.944 fused_ordering(297) 00:15:04.944 fused_ordering(298) 00:15:04.944 fused_ordering(299) 00:15:04.944 fused_ordering(300) 00:15:04.944 fused_ordering(301) 00:15:04.944 fused_ordering(302) 00:15:04.944 fused_ordering(303) 00:15:04.944 fused_ordering(304) 00:15:04.944 fused_ordering(305) 00:15:04.944 fused_ordering(306) 00:15:04.944 fused_ordering(307) 00:15:04.944 fused_ordering(308) 00:15:04.944 fused_ordering(309) 00:15:04.944 fused_ordering(310) 00:15:04.944 fused_ordering(311) 00:15:04.944 
fused_ordering(312) 00:15:04.944 fused_ordering(313) 00:15:04.944 fused_ordering(314) 00:15:04.944 fused_ordering(315) 00:15:04.944 fused_ordering(316) 00:15:04.944 fused_ordering(317) 00:15:04.944 fused_ordering(318) 00:15:04.944 fused_ordering(319) 00:15:04.944 fused_ordering(320) 00:15:04.944 fused_ordering(321) 00:15:04.944 fused_ordering(322) 00:15:04.944 fused_ordering(323) 00:15:04.944 fused_ordering(324) 00:15:04.944 fused_ordering(325) 00:15:04.944 fused_ordering(326) 00:15:04.944 fused_ordering(327) 00:15:04.944 fused_ordering(328) 00:15:04.944 fused_ordering(329) 00:15:04.944 fused_ordering(330) 00:15:04.944 fused_ordering(331) 00:15:04.944 fused_ordering(332) 00:15:04.944 fused_ordering(333) 00:15:04.944 fused_ordering(334) 00:15:04.944 fused_ordering(335) 00:15:04.944 fused_ordering(336) 00:15:04.944 fused_ordering(337) 00:15:04.944 fused_ordering(338) 00:15:04.944 fused_ordering(339) 00:15:04.944 fused_ordering(340) 00:15:04.944 fused_ordering(341) 00:15:04.944 fused_ordering(342) 00:15:04.944 fused_ordering(343) 00:15:04.944 fused_ordering(344) 00:15:04.944 fused_ordering(345) 00:15:04.944 fused_ordering(346) 00:15:04.944 fused_ordering(347) 00:15:04.944 fused_ordering(348) 00:15:04.944 fused_ordering(349) 00:15:04.944 fused_ordering(350) 00:15:04.944 fused_ordering(351) 00:15:04.944 fused_ordering(352) 00:15:04.944 fused_ordering(353) 00:15:04.944 fused_ordering(354) 00:15:04.944 fused_ordering(355) 00:15:04.944 fused_ordering(356) 00:15:04.944 fused_ordering(357) 00:15:04.944 fused_ordering(358) 00:15:04.944 fused_ordering(359) 00:15:04.944 fused_ordering(360) 00:15:04.944 fused_ordering(361) 00:15:04.944 fused_ordering(362) 00:15:04.944 fused_ordering(363) 00:15:04.944 fused_ordering(364) 00:15:04.944 fused_ordering(365) 00:15:04.944 fused_ordering(366) 00:15:04.944 fused_ordering(367) 00:15:04.944 fused_ordering(368) 00:15:04.944 fused_ordering(369) 00:15:04.944 fused_ordering(370) 00:15:04.944 fused_ordering(371) 00:15:04.944 fused_ordering(372) 00:15:04.944 fused_ordering(373) 00:15:04.944 fused_ordering(374) 00:15:04.944 fused_ordering(375) 00:15:04.944 fused_ordering(376) 00:15:04.944 fused_ordering(377) 00:15:04.944 fused_ordering(378) 00:15:04.944 fused_ordering(379) 00:15:04.944 fused_ordering(380) 00:15:04.944 fused_ordering(381) 00:15:04.944 fused_ordering(382) 00:15:04.944 fused_ordering(383) 00:15:04.944 fused_ordering(384) 00:15:04.944 fused_ordering(385) 00:15:04.944 fused_ordering(386) 00:15:04.944 fused_ordering(387) 00:15:04.944 fused_ordering(388) 00:15:04.944 fused_ordering(389) 00:15:04.944 fused_ordering(390) 00:15:04.944 fused_ordering(391) 00:15:04.944 fused_ordering(392) 00:15:04.944 fused_ordering(393) 00:15:04.944 fused_ordering(394) 00:15:04.944 fused_ordering(395) 00:15:04.944 fused_ordering(396) 00:15:04.944 fused_ordering(397) 00:15:04.944 fused_ordering(398) 00:15:04.944 fused_ordering(399) 00:15:04.944 fused_ordering(400) 00:15:04.944 fused_ordering(401) 00:15:04.944 fused_ordering(402) 00:15:04.944 fused_ordering(403) 00:15:04.944 fused_ordering(404) 00:15:04.944 fused_ordering(405) 00:15:04.944 fused_ordering(406) 00:15:04.944 fused_ordering(407) 00:15:04.944 fused_ordering(408) 00:15:04.944 fused_ordering(409) 00:15:04.944 fused_ordering(410) 00:15:05.888 fused_ordering(411) 00:15:05.888 fused_ordering(412) 00:15:05.888 fused_ordering(413) 00:15:05.888 fused_ordering(414) 00:15:05.888 fused_ordering(415) 00:15:05.888 fused_ordering(416) 00:15:05.888 fused_ordering(417) 00:15:05.888 fused_ordering(418) 00:15:05.888 fused_ordering(419) 
00:15:05.888 fused_ordering(420) 00:15:05.888 fused_ordering(421) 00:15:05.888 fused_ordering(422) 00:15:05.888 fused_ordering(423) 00:15:05.888 fused_ordering(424) 00:15:05.888 fused_ordering(425) 00:15:05.888 fused_ordering(426) 00:15:05.888 fused_ordering(427) 00:15:05.888 fused_ordering(428) 00:15:05.888 fused_ordering(429) 00:15:05.888 fused_ordering(430) 00:15:05.888 fused_ordering(431) 00:15:05.888 fused_ordering(432) 00:15:05.888 fused_ordering(433) 00:15:05.888 fused_ordering(434) 00:15:05.888 fused_ordering(435) 00:15:05.888 fused_ordering(436) 00:15:05.888 fused_ordering(437) 00:15:05.888 fused_ordering(438) 00:15:05.888 fused_ordering(439) 00:15:05.888 fused_ordering(440) 00:15:05.888 fused_ordering(441) 00:15:05.888 fused_ordering(442) 00:15:05.888 fused_ordering(443) 00:15:05.888 fused_ordering(444) 00:15:05.888 fused_ordering(445) 00:15:05.888 fused_ordering(446) 00:15:05.888 fused_ordering(447) 00:15:05.888 fused_ordering(448) 00:15:05.888 fused_ordering(449) 00:15:05.888 fused_ordering(450) 00:15:05.888 fused_ordering(451) 00:15:05.888 fused_ordering(452) 00:15:05.888 fused_ordering(453) 00:15:05.888 fused_ordering(454) 00:15:05.888 fused_ordering(455) 00:15:05.888 fused_ordering(456) 00:15:05.888 fused_ordering(457) 00:15:05.888 fused_ordering(458) 00:15:05.888 fused_ordering(459) 00:15:05.888 fused_ordering(460) 00:15:05.888 fused_ordering(461) 00:15:05.888 fused_ordering(462) 00:15:05.888 fused_ordering(463) 00:15:05.888 fused_ordering(464) 00:15:05.888 fused_ordering(465) 00:15:05.888 fused_ordering(466) 00:15:05.888 fused_ordering(467) 00:15:05.888 fused_ordering(468) 00:15:05.888 fused_ordering(469) 00:15:05.888 fused_ordering(470) 00:15:05.888 fused_ordering(471) 00:15:05.888 fused_ordering(472) 00:15:05.888 fused_ordering(473) 00:15:05.888 fused_ordering(474) 00:15:05.888 fused_ordering(475) 00:15:05.888 fused_ordering(476) 00:15:05.888 fused_ordering(477) 00:15:05.888 fused_ordering(478) 00:15:05.888 fused_ordering(479) 00:15:05.888 fused_ordering(480) 00:15:05.888 fused_ordering(481) 00:15:05.888 fused_ordering(482) 00:15:05.888 fused_ordering(483) 00:15:05.889 fused_ordering(484) 00:15:05.889 fused_ordering(485) 00:15:05.889 fused_ordering(486) 00:15:05.889 fused_ordering(487) 00:15:05.889 fused_ordering(488) 00:15:05.889 fused_ordering(489) 00:15:05.889 fused_ordering(490) 00:15:05.889 fused_ordering(491) 00:15:05.889 fused_ordering(492) 00:15:05.889 fused_ordering(493) 00:15:05.889 fused_ordering(494) 00:15:05.889 fused_ordering(495) 00:15:05.889 fused_ordering(496) 00:15:05.889 fused_ordering(497) 00:15:05.889 fused_ordering(498) 00:15:05.889 fused_ordering(499) 00:15:05.889 fused_ordering(500) 00:15:05.889 fused_ordering(501) 00:15:05.889 fused_ordering(502) 00:15:05.889 fused_ordering(503) 00:15:05.889 fused_ordering(504) 00:15:05.889 fused_ordering(505) 00:15:05.889 fused_ordering(506) 00:15:05.889 fused_ordering(507) 00:15:05.889 fused_ordering(508) 00:15:05.889 fused_ordering(509) 00:15:05.889 fused_ordering(510) 00:15:05.889 fused_ordering(511) 00:15:05.889 fused_ordering(512) 00:15:05.889 fused_ordering(513) 00:15:05.889 fused_ordering(514) 00:15:05.889 fused_ordering(515) 00:15:05.889 fused_ordering(516) 00:15:05.889 fused_ordering(517) 00:15:05.889 fused_ordering(518) 00:15:05.889 fused_ordering(519) 00:15:05.889 fused_ordering(520) 00:15:05.889 fused_ordering(521) 00:15:05.889 fused_ordering(522) 00:15:05.889 fused_ordering(523) 00:15:05.889 fused_ordering(524) 00:15:05.889 fused_ordering(525) 00:15:05.889 fused_ordering(526) 00:15:05.889 
fused_ordering(527) 00:15:05.889 fused_ordering(528) 00:15:05.889 fused_ordering(529) 00:15:05.889 fused_ordering(530) 00:15:05.889 fused_ordering(531) 00:15:05.889 fused_ordering(532) 00:15:05.889 fused_ordering(533) 00:15:05.889 fused_ordering(534) 00:15:05.889 fused_ordering(535) 00:15:05.889 fused_ordering(536) 00:15:05.889 fused_ordering(537) 00:15:05.889 fused_ordering(538) 00:15:05.889 fused_ordering(539) 00:15:05.889 fused_ordering(540) 00:15:05.889 fused_ordering(541) 00:15:05.889 fused_ordering(542) 00:15:05.889 fused_ordering(543) 00:15:05.889 fused_ordering(544) 00:15:05.889 fused_ordering(545) 00:15:05.889 fused_ordering(546) 00:15:05.889 fused_ordering(547) 00:15:05.889 fused_ordering(548) 00:15:05.889 fused_ordering(549) 00:15:05.889 fused_ordering(550) 00:15:05.889 fused_ordering(551) 00:15:05.889 fused_ordering(552) 00:15:05.889 fused_ordering(553) 00:15:05.889 fused_ordering(554) 00:15:05.889 fused_ordering(555) 00:15:05.889 fused_ordering(556) 00:15:05.889 fused_ordering(557) 00:15:05.889 fused_ordering(558) 00:15:05.889 fused_ordering(559) 00:15:05.889 fused_ordering(560) 00:15:05.889 fused_ordering(561) 00:15:05.889 fused_ordering(562) 00:15:05.889 fused_ordering(563) 00:15:05.889 fused_ordering(564) 00:15:05.889 fused_ordering(565) 00:15:05.889 fused_ordering(566) 00:15:05.889 fused_ordering(567) 00:15:05.889 fused_ordering(568) 00:15:05.889 fused_ordering(569) 00:15:05.889 fused_ordering(570) 00:15:05.889 fused_ordering(571) 00:15:05.889 fused_ordering(572) 00:15:05.889 fused_ordering(573) 00:15:05.889 fused_ordering(574) 00:15:05.889 fused_ordering(575) 00:15:05.889 fused_ordering(576) 00:15:05.889 fused_ordering(577) 00:15:05.889 fused_ordering(578) 00:15:05.889 fused_ordering(579) 00:15:05.889 fused_ordering(580) 00:15:05.889 fused_ordering(581) 00:15:05.889 fused_ordering(582) 00:15:05.889 fused_ordering(583) 00:15:05.889 fused_ordering(584) 00:15:05.889 fused_ordering(585) 00:15:05.889 fused_ordering(586) 00:15:05.889 fused_ordering(587) 00:15:05.889 fused_ordering(588) 00:15:05.889 fused_ordering(589) 00:15:05.889 fused_ordering(590) 00:15:05.889 fused_ordering(591) 00:15:05.889 fused_ordering(592) 00:15:05.889 fused_ordering(593) 00:15:05.889 fused_ordering(594) 00:15:05.889 fused_ordering(595) 00:15:05.889 fused_ordering(596) 00:15:05.889 fused_ordering(597) 00:15:05.889 fused_ordering(598) 00:15:05.889 fused_ordering(599) 00:15:05.889 fused_ordering(600) 00:15:05.889 fused_ordering(601) 00:15:05.889 fused_ordering(602) 00:15:05.889 fused_ordering(603) 00:15:05.889 fused_ordering(604) 00:15:05.889 fused_ordering(605) 00:15:05.889 fused_ordering(606) 00:15:05.889 fused_ordering(607) 00:15:05.889 fused_ordering(608) 00:15:05.889 fused_ordering(609) 00:15:05.889 fused_ordering(610) 00:15:05.889 fused_ordering(611) 00:15:05.889 fused_ordering(612) 00:15:05.889 fused_ordering(613) 00:15:05.889 fused_ordering(614) 00:15:05.889 fused_ordering(615) 00:15:06.834 fused_ordering(616) 00:15:06.834 fused_ordering(617) 00:15:06.834 fused_ordering(618) 00:15:06.834 fused_ordering(619) 00:15:06.834 fused_ordering(620) 00:15:06.834 fused_ordering(621) 00:15:06.834 fused_ordering(622) 00:15:06.834 fused_ordering(623) 00:15:06.834 fused_ordering(624) 00:15:06.834 fused_ordering(625) 00:15:06.834 fused_ordering(626) 00:15:06.834 fused_ordering(627) 00:15:06.834 fused_ordering(628) 00:15:06.834 fused_ordering(629) 00:15:06.834 fused_ordering(630) 00:15:06.834 fused_ordering(631) 00:15:06.834 fused_ordering(632) 00:15:06.834 fused_ordering(633) 00:15:06.834 fused_ordering(634) 
00:15:06.834 fused_ordering(635) 00:15:06.834 fused_ordering(636) 00:15:06.834 fused_ordering(637) 00:15:06.834 fused_ordering(638) 00:15:06.834 fused_ordering(639) 00:15:06.834 fused_ordering(640) 00:15:06.834 fused_ordering(641) 00:15:06.834 fused_ordering(642) 00:15:06.834 fused_ordering(643) 00:15:06.834 fused_ordering(644) 00:15:06.834 fused_ordering(645) 00:15:06.834 fused_ordering(646) 00:15:06.834 fused_ordering(647) 00:15:06.834 fused_ordering(648) 00:15:06.834 fused_ordering(649) 00:15:06.834 fused_ordering(650) 00:15:06.834 fused_ordering(651) 00:15:06.834 fused_ordering(652) 00:15:06.834 fused_ordering(653) 00:15:06.834 fused_ordering(654) 00:15:06.834 fused_ordering(655) 00:15:06.834 fused_ordering(656) 00:15:06.834 fused_ordering(657) 00:15:06.834 fused_ordering(658) 00:15:06.834 fused_ordering(659) 00:15:06.834 fused_ordering(660) 00:15:06.834 fused_ordering(661) 00:15:06.834 fused_ordering(662) 00:15:06.834 fused_ordering(663) 00:15:06.834 fused_ordering(664) 00:15:06.834 fused_ordering(665) 00:15:06.834 fused_ordering(666) 00:15:06.834 fused_ordering(667) 00:15:06.834 fused_ordering(668) 00:15:06.834 fused_ordering(669) 00:15:06.834 fused_ordering(670) 00:15:06.834 fused_ordering(671) 00:15:06.834 fused_ordering(672) 00:15:06.834 fused_ordering(673) 00:15:06.834 fused_ordering(674) 00:15:06.834 fused_ordering(675) 00:15:06.834 fused_ordering(676) 00:15:06.834 fused_ordering(677) 00:15:06.834 fused_ordering(678) 00:15:06.834 fused_ordering(679) 00:15:06.834 fused_ordering(680) 00:15:06.834 fused_ordering(681) 00:15:06.834 fused_ordering(682) 00:15:06.834 fused_ordering(683) 00:15:06.834 fused_ordering(684) 00:15:06.834 fused_ordering(685) 00:15:06.834 fused_ordering(686) 00:15:06.834 fused_ordering(687) 00:15:06.834 fused_ordering(688) 00:15:06.834 fused_ordering(689) 00:15:06.834 fused_ordering(690) 00:15:06.834 fused_ordering(691) 00:15:06.834 fused_ordering(692) 00:15:06.834 fused_ordering(693) 00:15:06.834 fused_ordering(694) 00:15:06.834 fused_ordering(695) 00:15:06.834 fused_ordering(696) 00:15:06.834 fused_ordering(697) 00:15:06.834 fused_ordering(698) 00:15:06.834 fused_ordering(699) 00:15:06.834 fused_ordering(700) 00:15:06.834 fused_ordering(701) 00:15:06.834 fused_ordering(702) 00:15:06.834 fused_ordering(703) 00:15:06.834 fused_ordering(704) 00:15:06.834 fused_ordering(705) 00:15:06.834 fused_ordering(706) 00:15:06.834 fused_ordering(707) 00:15:06.834 fused_ordering(708) 00:15:06.834 fused_ordering(709) 00:15:06.834 fused_ordering(710) 00:15:06.834 fused_ordering(711) 00:15:06.834 fused_ordering(712) 00:15:06.834 fused_ordering(713) 00:15:06.834 fused_ordering(714) 00:15:06.834 fused_ordering(715) 00:15:06.834 fused_ordering(716) 00:15:06.834 fused_ordering(717) 00:15:06.834 fused_ordering(718) 00:15:06.834 fused_ordering(719) 00:15:06.834 fused_ordering(720) 00:15:06.834 fused_ordering(721) 00:15:06.834 fused_ordering(722) 00:15:06.834 fused_ordering(723) 00:15:06.834 fused_ordering(724) 00:15:06.834 fused_ordering(725) 00:15:06.834 fused_ordering(726) 00:15:06.834 fused_ordering(727) 00:15:06.834 fused_ordering(728) 00:15:06.834 fused_ordering(729) 00:15:06.834 fused_ordering(730) 00:15:06.834 fused_ordering(731) 00:15:06.834 fused_ordering(732) 00:15:06.834 fused_ordering(733) 00:15:06.834 fused_ordering(734) 00:15:06.834 fused_ordering(735) 00:15:06.834 fused_ordering(736) 00:15:06.834 fused_ordering(737) 00:15:06.834 fused_ordering(738) 00:15:06.834 fused_ordering(739) 00:15:06.834 fused_ordering(740) 00:15:06.834 fused_ordering(741) 00:15:06.834 
fused_ordering(742) 00:15:06.834 fused_ordering(743) 00:15:06.834 fused_ordering(744) 00:15:06.834 fused_ordering(745) 00:15:06.834 fused_ordering(746) 00:15:06.834 fused_ordering(747) 00:15:06.834 fused_ordering(748) 00:15:06.834 fused_ordering(749) 00:15:06.834 fused_ordering(750) 00:15:06.834 fused_ordering(751) 00:15:06.834 fused_ordering(752) 00:15:06.834 fused_ordering(753) 00:15:06.834 fused_ordering(754) 00:15:06.834 fused_ordering(755) 00:15:06.834 fused_ordering(756) 00:15:06.834 fused_ordering(757) 00:15:06.834 fused_ordering(758) 00:15:06.834 fused_ordering(759) 00:15:06.834 fused_ordering(760) 00:15:06.834 fused_ordering(761) 00:15:06.834 fused_ordering(762) 00:15:06.834 fused_ordering(763) 00:15:06.834 fused_ordering(764) 00:15:06.834 fused_ordering(765) 00:15:06.834 fused_ordering(766) 00:15:06.834 fused_ordering(767) 00:15:06.834 fused_ordering(768) 00:15:06.834 fused_ordering(769) 00:15:06.834 fused_ordering(770) 00:15:06.834 fused_ordering(771) 00:15:06.834 fused_ordering(772) 00:15:06.834 fused_ordering(773) 00:15:06.834 fused_ordering(774) 00:15:06.834 fused_ordering(775) 00:15:06.834 fused_ordering(776) 00:15:06.834 fused_ordering(777) 00:15:06.834 fused_ordering(778) 00:15:06.834 fused_ordering(779) 00:15:06.834 fused_ordering(780) 00:15:06.834 fused_ordering(781) 00:15:06.834 fused_ordering(782) 00:15:06.834 fused_ordering(783) 00:15:06.834 fused_ordering(784) 00:15:06.834 fused_ordering(785) 00:15:06.834 fused_ordering(786) 00:15:06.834 fused_ordering(787) 00:15:06.834 fused_ordering(788) 00:15:06.834 fused_ordering(789) 00:15:06.834 fused_ordering(790) 00:15:06.834 fused_ordering(791) 00:15:06.834 fused_ordering(792) 00:15:06.834 fused_ordering(793) 00:15:06.834 fused_ordering(794) 00:15:06.834 fused_ordering(795) 00:15:06.834 fused_ordering(796) 00:15:06.834 fused_ordering(797) 00:15:06.834 fused_ordering(798) 00:15:06.834 fused_ordering(799) 00:15:06.834 fused_ordering(800) 00:15:06.834 fused_ordering(801) 00:15:06.834 fused_ordering(802) 00:15:06.834 fused_ordering(803) 00:15:06.834 fused_ordering(804) 00:15:06.834 fused_ordering(805) 00:15:06.834 fused_ordering(806) 00:15:06.834 fused_ordering(807) 00:15:06.834 fused_ordering(808) 00:15:06.834 fused_ordering(809) 00:15:06.834 fused_ordering(810) 00:15:06.834 fused_ordering(811) 00:15:06.834 fused_ordering(812) 00:15:06.834 fused_ordering(813) 00:15:06.834 fused_ordering(814) 00:15:06.834 fused_ordering(815) 00:15:06.834 fused_ordering(816) 00:15:06.834 fused_ordering(817) 00:15:06.834 fused_ordering(818) 00:15:06.834 fused_ordering(819) 00:15:06.834 fused_ordering(820) 00:15:07.409 fused_ordering(821) 00:15:07.409 fused_ordering(822) 00:15:07.409 fused_ordering(823) 00:15:07.409 fused_ordering(824) 00:15:07.409 fused_ordering(825) 00:15:07.409 fused_ordering(826) 00:15:07.409 fused_ordering(827) 00:15:07.409 fused_ordering(828) 00:15:07.409 fused_ordering(829) 00:15:07.409 fused_ordering(830) 00:15:07.409 fused_ordering(831) 00:15:07.409 fused_ordering(832) 00:15:07.409 fused_ordering(833) 00:15:07.409 fused_ordering(834) 00:15:07.409 fused_ordering(835) 00:15:07.409 fused_ordering(836) 00:15:07.409 fused_ordering(837) 00:15:07.409 fused_ordering(838) 00:15:07.409 fused_ordering(839) 00:15:07.409 fused_ordering(840) 00:15:07.409 fused_ordering(841) 00:15:07.409 fused_ordering(842) 00:15:07.409 fused_ordering(843) 00:15:07.409 fused_ordering(844) 00:15:07.409 fused_ordering(845) 00:15:07.409 fused_ordering(846) 00:15:07.409 fused_ordering(847) 00:15:07.409 fused_ordering(848) 00:15:07.409 fused_ordering(849) 
00:15:07.409 fused_ordering(850) 00:15:07.409 fused_ordering(851) 00:15:07.409 fused_ordering(852) 00:15:07.409 fused_ordering(853) 00:15:07.409 fused_ordering(854) 00:15:07.409 fused_ordering(855) 00:15:07.409 fused_ordering(856) 00:15:07.409 fused_ordering(857) 00:15:07.409 fused_ordering(858) 00:15:07.409 fused_ordering(859) 00:15:07.409 fused_ordering(860) 00:15:07.409 fused_ordering(861) 00:15:07.409 fused_ordering(862) 00:15:07.409 fused_ordering(863) 00:15:07.409 fused_ordering(864) 00:15:07.409 fused_ordering(865) 00:15:07.409 fused_ordering(866) 00:15:07.409 fused_ordering(867) 00:15:07.409 fused_ordering(868) 00:15:07.409 fused_ordering(869) 00:15:07.409 fused_ordering(870) 00:15:07.409 fused_ordering(871) 00:15:07.409 fused_ordering(872) 00:15:07.409 fused_ordering(873) 00:15:07.409 fused_ordering(874) 00:15:07.409 fused_ordering(875) 00:15:07.409 fused_ordering(876) 00:15:07.409 fused_ordering(877) 00:15:07.409 fused_ordering(878) 00:15:07.409 fused_ordering(879) 00:15:07.409 fused_ordering(880) 00:15:07.409 fused_ordering(881) 00:15:07.409 fused_ordering(882) 00:15:07.409 fused_ordering(883) 00:15:07.409 fused_ordering(884) 00:15:07.409 fused_ordering(885) 00:15:07.409 fused_ordering(886) 00:15:07.409 fused_ordering(887) 00:15:07.409 fused_ordering(888) 00:15:07.409 fused_ordering(889) 00:15:07.409 fused_ordering(890) 00:15:07.409 fused_ordering(891) 00:15:07.409 fused_ordering(892) 00:15:07.409 fused_ordering(893) 00:15:07.409 fused_ordering(894) 00:15:07.409 fused_ordering(895) 00:15:07.409 fused_ordering(896) 00:15:07.409 fused_ordering(897) 00:15:07.409 fused_ordering(898) 00:15:07.409 fused_ordering(899) 00:15:07.409 fused_ordering(900) 00:15:07.409 fused_ordering(901) 00:15:07.409 fused_ordering(902) 00:15:07.409 fused_ordering(903) 00:15:07.409 fused_ordering(904) 00:15:07.409 fused_ordering(905) 00:15:07.409 fused_ordering(906) 00:15:07.409 fused_ordering(907) 00:15:07.409 fused_ordering(908) 00:15:07.409 fused_ordering(909) 00:15:07.409 fused_ordering(910) 00:15:07.409 fused_ordering(911) 00:15:07.409 fused_ordering(912) 00:15:07.409 fused_ordering(913) 00:15:07.409 fused_ordering(914) 00:15:07.409 fused_ordering(915) 00:15:07.409 fused_ordering(916) 00:15:07.409 fused_ordering(917) 00:15:07.409 fused_ordering(918) 00:15:07.409 fused_ordering(919) 00:15:07.409 fused_ordering(920) 00:15:07.409 fused_ordering(921) 00:15:07.409 fused_ordering(922) 00:15:07.409 fused_ordering(923) 00:15:07.409 fused_ordering(924) 00:15:07.409 fused_ordering(925) 00:15:07.409 fused_ordering(926) 00:15:07.409 fused_ordering(927) 00:15:07.409 fused_ordering(928) 00:15:07.409 fused_ordering(929) 00:15:07.409 fused_ordering(930) 00:15:07.409 fused_ordering(931) 00:15:07.409 fused_ordering(932) 00:15:07.409 fused_ordering(933) 00:15:07.409 fused_ordering(934) 00:15:07.409 fused_ordering(935) 00:15:07.409 fused_ordering(936) 00:15:07.409 fused_ordering(937) 00:15:07.409 fused_ordering(938) 00:15:07.409 fused_ordering(939) 00:15:07.409 fused_ordering(940) 00:15:07.409 fused_ordering(941) 00:15:07.409 fused_ordering(942) 00:15:07.409 fused_ordering(943) 00:15:07.409 fused_ordering(944) 00:15:07.409 fused_ordering(945) 00:15:07.409 fused_ordering(946) 00:15:07.409 fused_ordering(947) 00:15:07.410 fused_ordering(948) 00:15:07.410 fused_ordering(949) 00:15:07.410 fused_ordering(950) 00:15:07.410 fused_ordering(951) 00:15:07.410 fused_ordering(952) 00:15:07.410 fused_ordering(953) 00:15:07.410 fused_ordering(954) 00:15:07.410 fused_ordering(955) 00:15:07.410 fused_ordering(956) 00:15:07.410 
fused_ordering(957) 00:15:07.410 fused_ordering(958) 00:15:07.410 fused_ordering(959) 00:15:07.410 fused_ordering(960) 00:15:07.410 fused_ordering(961) 00:15:07.410 fused_ordering(962) 00:15:07.410 fused_ordering(963) 00:15:07.410 fused_ordering(964) 00:15:07.410 fused_ordering(965) 00:15:07.410 fused_ordering(966) 00:15:07.410 fused_ordering(967) 00:15:07.410 fused_ordering(968) 00:15:07.410 fused_ordering(969) 00:15:07.410 fused_ordering(970) 00:15:07.410 fused_ordering(971) 00:15:07.410 fused_ordering(972) 00:15:07.410 fused_ordering(973) 00:15:07.410 fused_ordering(974) 00:15:07.410 fused_ordering(975) 00:15:07.410 fused_ordering(976) 00:15:07.410 fused_ordering(977) 00:15:07.410 fused_ordering(978) 00:15:07.410 fused_ordering(979) 00:15:07.410 fused_ordering(980) 00:15:07.410 fused_ordering(981) 00:15:07.410 fused_ordering(982) 00:15:07.410 fused_ordering(983) 00:15:07.410 fused_ordering(984) 00:15:07.410 fused_ordering(985) 00:15:07.410 fused_ordering(986) 00:15:07.410 fused_ordering(987) 00:15:07.410 fused_ordering(988) 00:15:07.410 fused_ordering(989) 00:15:07.410 fused_ordering(990) 00:15:07.410 fused_ordering(991) 00:15:07.410 fused_ordering(992) 00:15:07.410 fused_ordering(993) 00:15:07.410 fused_ordering(994) 00:15:07.410 fused_ordering(995) 00:15:07.410 fused_ordering(996) 00:15:07.410 fused_ordering(997) 00:15:07.410 fused_ordering(998) 00:15:07.410 fused_ordering(999) 00:15:07.410 fused_ordering(1000) 00:15:07.410 fused_ordering(1001) 00:15:07.410 fused_ordering(1002) 00:15:07.410 fused_ordering(1003) 00:15:07.410 fused_ordering(1004) 00:15:07.410 fused_ordering(1005) 00:15:07.410 fused_ordering(1006) 00:15:07.410 fused_ordering(1007) 00:15:07.410 fused_ordering(1008) 00:15:07.410 fused_ordering(1009) 00:15:07.410 fused_ordering(1010) 00:15:07.410 fused_ordering(1011) 00:15:07.410 fused_ordering(1012) 00:15:07.410 fused_ordering(1013) 00:15:07.410 fused_ordering(1014) 00:15:07.410 fused_ordering(1015) 00:15:07.410 fused_ordering(1016) 00:15:07.410 fused_ordering(1017) 00:15:07.410 fused_ordering(1018) 00:15:07.410 fused_ordering(1019) 00:15:07.410 fused_ordering(1020) 00:15:07.410 fused_ordering(1021) 00:15:07.410 fused_ordering(1022) 00:15:07.410 fused_ordering(1023) 00:15:07.672 22:57:35 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:07.672 22:57:35 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:07.672 22:57:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:07.672 22:57:35 -- nvmf/common.sh@116 -- # sync 00:15:07.672 22:57:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:07.672 22:57:35 -- nvmf/common.sh@119 -- # set +e 00:15:07.672 22:57:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:07.672 22:57:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:07.672 rmmod nvme_tcp 00:15:07.672 rmmod nvme_fabrics 00:15:07.672 rmmod nvme_keyring 00:15:07.672 22:57:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:07.672 22:57:35 -- nvmf/common.sh@123 -- # set -e 00:15:07.672 22:57:35 -- nvmf/common.sh@124 -- # return 0 00:15:07.672 22:57:35 -- nvmf/common.sh@477 -- # '[' -n 4030333 ']' 00:15:07.672 22:57:35 -- nvmf/common.sh@478 -- # killprocess 4030333 00:15:07.672 22:57:35 -- common/autotest_common.sh@926 -- # '[' -z 4030333 ']' 00:15:07.672 22:57:35 -- common/autotest_common.sh@930 -- # kill -0 4030333 00:15:07.672 22:57:35 -- common/autotest_common.sh@931 -- # uname 00:15:07.672 22:57:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:07.672 22:57:35 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 4030333 00:15:07.672 22:57:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:07.672 22:57:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:07.672 22:57:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4030333' 00:15:07.672 killing process with pid 4030333 00:15:07.672 22:57:35 -- common/autotest_common.sh@945 -- # kill 4030333 00:15:07.672 22:57:35 -- common/autotest_common.sh@950 -- # wait 4030333 00:15:07.672 22:57:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:07.672 22:57:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:07.672 22:57:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:07.672 22:57:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.672 22:57:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:07.672 22:57:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.672 22:57:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.672 22:57:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.250 22:57:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:10.250 00:15:10.250 real 0m14.532s 00:15:10.250 user 0m8.972s 00:15:10.250 sys 0m7.938s 00:15:10.250 22:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.250 22:57:37 -- common/autotest_common.sh@10 -- # set +x 00:15:10.250 ************************************ 00:15:10.250 END TEST nvmf_fused_ordering 00:15:10.250 ************************************ 00:15:10.250 22:57:37 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:10.250 22:57:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:10.250 22:57:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.250 22:57:37 -- common/autotest_common.sh@10 -- # set +x 00:15:10.250 ************************************ 00:15:10.250 START TEST nvmf_delete_subsystem 00:15:10.250 ************************************ 00:15:10.250 22:57:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:10.250 * Looking for test storage... 
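(For reference, the nvmftestfini/killprocess teardown captured above reduces to the commands below. This is a minimal sketch, assuming a root shell and that the target pid, 4030333 in this run, is a child of the same shell; killprocess in autotest_common.sh adds checks and logging that are omitted here.)

    # unload the initiator-side kernel modules; the verbose output above shows
    # nvme_tcp, nvme_fabrics and nvme_keyring being removed
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf target: check the process name first, then kill and reap it
    nvmfpid=4030333                       # pid reported when the target was started
    ps --no-headers -o comm= "$nvmfpid"   # killprocess refuses to kill if this prints "sudo"
    kill "$nvmfpid"
    wait "$nvmfpid"                       # works because the pid is a child of this shell
    # drop the initiator-side test address configured by nvmf_tcp_init
    ip -4 addr flush cvl_0_1
    # _remove_spdk_ns (its output is redirected above) additionally deletes the
    # test network namespace before the next test re-creates it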
00:15:10.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.251 22:57:38 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.251 22:57:38 -- nvmf/common.sh@7 -- # uname -s 00:15:10.251 22:57:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.251 22:57:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.251 22:57:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.251 22:57:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.251 22:57:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.251 22:57:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.251 22:57:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.251 22:57:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.251 22:57:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.251 22:57:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.251 22:57:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:10.251 22:57:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:10.251 22:57:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.251 22:57:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.251 22:57:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.251 22:57:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.251 22:57:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.251 22:57:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.251 22:57:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.251 22:57:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.251 22:57:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.251 22:57:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.251 22:57:38 -- paths/export.sh@5 -- # export PATH 00:15:10.251 22:57:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.251 22:57:38 -- nvmf/common.sh@46 -- # : 0 00:15:10.251 22:57:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:10.251 22:57:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:10.251 22:57:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:10.251 22:57:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.251 22:57:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.251 22:57:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:10.251 22:57:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:10.251 22:57:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:10.251 22:57:38 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:10.251 22:57:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:10.251 22:57:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.251 22:57:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:10.251 22:57:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:10.251 22:57:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:10.251 22:57:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.251 22:57:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.251 22:57:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.251 22:57:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:10.251 22:57:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:10.251 22:57:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:10.251 22:57:38 -- common/autotest_common.sh@10 -- # set +x 00:15:16.860 22:57:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:16.860 22:57:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:16.860 22:57:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:16.860 22:57:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:16.860 22:57:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:16.860 22:57:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:16.860 22:57:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:16.860 22:57:44 -- nvmf/common.sh@294 -- # net_devs=() 00:15:16.860 22:57:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:16.860 22:57:44 -- nvmf/common.sh@295 -- # e810=() 00:15:16.860 22:57:44 -- nvmf/common.sh@295 -- # local -ga e810 00:15:16.860 22:57:44 -- nvmf/common.sh@296 -- # x722=() 
00:15:16.860 22:57:44 -- nvmf/common.sh@296 -- # local -ga x722 00:15:16.860 22:57:44 -- nvmf/common.sh@297 -- # mlx=() 00:15:16.860 22:57:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:16.860 22:57:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:16.860 22:57:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:16.860 22:57:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:16.860 22:57:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:16.860 22:57:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:16.860 22:57:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:16.860 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:16.860 22:57:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:16.860 22:57:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:16.860 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:16.860 22:57:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:16.860 22:57:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:16.861 22:57:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:16.861 22:57:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:16.861 22:57:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:16.861 22:57:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.861 22:57:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:16.861 22:57:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.861 22:57:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:16.861 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:16.861 22:57:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:15:16.861 22:57:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:16.861 22:57:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:16.861 22:57:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:16.861 22:57:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:16.861 22:57:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:16.861 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:16.861 22:57:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:16.861 22:57:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:16.861 22:57:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:16.861 22:57:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:16.861 22:57:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:16.861 22:57:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:16.861 22:57:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.861 22:57:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.861 22:57:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:16.861 22:57:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:16.861 22:57:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:16.861 22:57:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:16.861 22:57:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:16.861 22:57:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:16.861 22:57:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.861 22:57:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:16.861 22:57:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:16.861 22:57:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:16.861 22:57:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:16.861 22:57:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:16.861 22:57:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:16.861 22:57:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:16.861 22:57:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:16.861 22:57:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:16.861 22:57:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:16.861 22:57:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:16.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:15:16.861 00:15:16.861 --- 10.0.0.2 ping statistics --- 00:15:16.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.861 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:15:16.861 22:57:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:16.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:16.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.428 ms 00:15:16.861 00:15:16.861 --- 10.0.0.1 ping statistics --- 00:15:16.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.861 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:15:16.861 22:57:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.861 22:57:45 -- nvmf/common.sh@410 -- # return 0 00:15:16.861 22:57:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:16.861 22:57:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.861 22:57:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:16.861 22:57:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:16.861 22:57:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.861 22:57:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:16.861 22:57:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.122 22:57:45 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:17.122 22:57:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.122 22:57:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:17.122 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.122 22:57:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:17.122 22:57:45 -- nvmf/common.sh@469 -- # nvmfpid=4035380 00:15:17.122 22:57:45 -- nvmf/common.sh@470 -- # waitforlisten 4035380 00:15:17.122 22:57:45 -- common/autotest_common.sh@819 -- # '[' -z 4035380 ']' 00:15:17.122 22:57:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.122 22:57:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.122 22:57:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.122 22:57:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.122 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.122 [2024-06-09 22:57:45.099739] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:17.122 [2024-06-09 22:57:45.099795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.122 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.122 [2024-06-09 22:57:45.165629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:17.123 [2024-06-09 22:57:45.228968] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.123 [2024-06-09 22:57:45.229085] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.123 [2024-06-09 22:57:45.229093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.123 [2024-06-09 22:57:45.229100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
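(The nvmf_tcp_init and nvmfappstart steps logged above amount to the condensed sketch below. It assumes a root shell run from the spdk repository root and that the two E810 ports are named cvl_0_0/cvl_0_1 as in this run; waitforlisten is approximated by polling the RPC socket.)

    tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
    # put the target-side port in its own namespace so the initiator (10.0.0.1)
    # and the target (10.0.0.2) talk over real TCP between the two physical ports
    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    # verify connectivity in both directions, as the ping output above does
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
    # load the host-side transport and start the SPDK target inside the namespace
    # with the same flags as nvmfappstart -m 0x3 above
    modprobe nvme-tcp
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # roughly what waitforlisten does: poll until /var/tmp/spdk.sock answers RPCs
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done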
00:15:17.123 [2024-06-09 22:57:45.229206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.123 [2024-06-09 22:57:45.229211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.696 22:57:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.696 22:57:45 -- common/autotest_common.sh@852 -- # return 0 00:15:17.696 22:57:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:17.696 22:57:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:17.696 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 22:57:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 [2024-06-09 22:57:45.912705] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 [2024-06-09 22:57:45.936871] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 NULL1 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 Delay0 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:17.958 22:57:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:17.958 22:57:45 -- common/autotest_common.sh@10 -- # set +x 00:15:17.958 22:57:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@28 -- # perf_pid=4035731 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:17.958 22:57:45 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:17.958 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.958 [2024-06-09 22:57:46.033513] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:19.875 22:57:47 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.875 22:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:19.875 22:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 starting I/O failed: -6 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 starting I/O failed: -6 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 starting I/O failed: -6 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 starting I/O failed: -6 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.136 Write completed with error (sct=0, sc=8) 00:15:20.136 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 [2024-06-09 22:57:48.199464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x17dd8f0 is same with the state(5) to be set 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 
00:15:20.137 starting I/O failed: -6 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 starting I/O failed: -6 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 [2024-06-09 22:57:48.203392] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2264000c00 is same with the state(5) to be set 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 
00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Write completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.137 Read completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Write completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Write completed with error (sct=0, sc=8) 00:15:20.138 Write completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Read completed with error (sct=0, sc=8) 00:15:20.138 Write completed with error (sct=0, sc=8) 00:15:21.082 [2024-06-09 22:57:49.174529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17d4910 is same with the state(5) to be set 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.082 Read completed with error (sct=0, sc=8) 00:15:21.082 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 [2024-06-09 22:57:49.203679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dd640 is same with the state(5) to be set 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read 
completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 [2024-06-09 22:57:49.203817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ddba0 is same with the state(5) to be set 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 [2024-06-09 22:57:49.206005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f226400bf20 is same with the state(5) to be set 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Write completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 Read completed with error (sct=0, sc=8) 00:15:21.083 [2024-06-09 
22:57:49.206081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f226400c600 is same with the state(5) to be set 00:15:21.083 [2024-06-09 22:57:49.206651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17d4910 (9): Bad file descriptor 00:15:21.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:21.083 22:57:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.083 22:57:49 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:21.083 22:57:49 -- target/delete_subsystem.sh@35 -- # kill -0 4035731 00:15:21.083 22:57:49 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:21.083 Initializing NVMe Controllers 00:15:21.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.083 Controller IO queue size 128, less than required. 00:15:21.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:21.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:21.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:21.083 Initialization complete. Launching workers. 00:15:21.083 ======================================================== 00:15:21.083 Latency(us) 00:15:21.083 Device Information : IOPS MiB/s Average min max 00:15:21.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.76 0.09 883684.91 229.85 1007250.64 00:15:21.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.32 0.08 917636.00 290.49 1009947.68 00:15:21.083 ======================================================== 00:15:21.083 Total : 335.08 0.16 899928.97 229.85 1009947.68 00:15:21.083 00:15:21.656 22:57:49 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:21.656 22:57:49 -- target/delete_subsystem.sh@35 -- # kill -0 4035731 00:15:21.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4035731) - No such process 00:15:21.656 22:57:49 -- target/delete_subsystem.sh@45 -- # NOT wait 4035731 00:15:21.656 22:57:49 -- common/autotest_common.sh@640 -- # local es=0 00:15:21.657 22:57:49 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 4035731 00:15:21.657 22:57:49 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:21.657 22:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:21.657 22:57:49 -- common/autotest_common.sh@632 -- # type -t wait 00:15:21.657 22:57:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:21.657 22:57:49 -- common/autotest_common.sh@643 -- # wait 4035731 00:15:21.657 22:57:49 -- common/autotest_common.sh@643 -- # es=1 00:15:21.657 22:57:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:21.657 22:57:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:21.657 22:57:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:21.657 22:57:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.657 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:21.657 22:57:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:15:21.657 22:57:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.657 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:21.657 [2024-06-09 22:57:49.735871] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.657 22:57:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:21.657 22:57:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:21.657 22:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:21.657 22:57:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@54 -- # perf_pid=4036411 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:21.657 22:57:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:21.657 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.657 [2024-06-09 22:57:49.804775] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:22.228 22:57:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:22.229 22:57:50 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:22.229 22:57:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:22.801 22:57:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:22.801 22:57:50 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:22.801 22:57:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:23.372 22:57:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:23.372 22:57:51 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:23.372 22:57:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:23.633 22:57:51 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:23.633 22:57:51 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:23.633 22:57:51 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.205 22:57:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.205 22:57:52 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:24.205 22:57:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.775 22:57:52 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:24.775 22:57:52 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:24.775 22:57:52 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:24.775 Initializing NVMe Controllers 00:15:24.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:24.775 Controller IO queue size 128, less than required. 00:15:24.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:24.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:24.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:24.775 Initialization complete. Launching workers. 
00:15:24.775 ======================================================== 00:15:24.775 Latency(us) 00:15:24.775 Device Information : IOPS MiB/s Average min max 00:15:24.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002714.59 1000158.29 1041805.90 00:15:24.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004365.63 1000631.64 1010940.09 00:15:24.775 ======================================================== 00:15:24.775 Total : 256.00 0.12 1003540.11 1000158.29 1041805.90 00:15:24.775 00:15:25.347 22:57:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:25.347 22:57:53 -- target/delete_subsystem.sh@57 -- # kill -0 4036411 00:15:25.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4036411) - No such process 00:15:25.347 22:57:53 -- target/delete_subsystem.sh@67 -- # wait 4036411 00:15:25.347 22:57:53 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:25.347 22:57:53 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:25.347 22:57:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:25.347 22:57:53 -- nvmf/common.sh@116 -- # sync 00:15:25.347 22:57:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:25.347 22:57:53 -- nvmf/common.sh@119 -- # set +e 00:15:25.347 22:57:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:25.347 22:57:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:25.347 rmmod nvme_tcp 00:15:25.347 rmmod nvme_fabrics 00:15:25.347 rmmod nvme_keyring 00:15:25.347 22:57:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:25.347 22:57:53 -- nvmf/common.sh@123 -- # set -e 00:15:25.347 22:57:53 -- nvmf/common.sh@124 -- # return 0 00:15:25.347 22:57:53 -- nvmf/common.sh@477 -- # '[' -n 4035380 ']' 00:15:25.347 22:57:53 -- nvmf/common.sh@478 -- # killprocess 4035380 00:15:25.347 22:57:53 -- common/autotest_common.sh@926 -- # '[' -z 4035380 ']' 00:15:25.347 22:57:53 -- common/autotest_common.sh@930 -- # kill -0 4035380 00:15:25.347 22:57:53 -- common/autotest_common.sh@931 -- # uname 00:15:25.347 22:57:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:25.347 22:57:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4035380 00:15:25.347 22:57:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:25.347 22:57:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:25.347 22:57:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4035380' 00:15:25.347 killing process with pid 4035380 00:15:25.347 22:57:53 -- common/autotest_common.sh@945 -- # kill 4035380 00:15:25.347 22:57:53 -- common/autotest_common.sh@950 -- # wait 4035380 00:15:25.608 22:57:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:25.608 22:57:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:25.608 22:57:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:25.608 22:57:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.608 22:57:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:25.608 22:57:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.608 22:57:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:25.608 22:57:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.525 22:57:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:27.525 00:15:27.525 real 0m17.655s 00:15:27.525 user 0m30.702s 00:15:27.525 sys 0m5.996s 
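For reference, the scenario this delete_subsystem run just exercised condenses to the sketch below. It is not the test script itself: it assumes an SPDK nvmf target is already up, that a bdev named Delay0 already exists (its creation is outside this excerpt), and it calls scripts/rpc.py directly where the test goes through its rpc_cmd helper (an assumption about the wrapper); the NQN, address, port and perf flags are taken from the trace above.

#!/usr/bin/env bash
# Condensed sketch of the delete-subsystem-under-I/O flow (assumptions noted above).
set -e
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2
PORT=4420

# Target side: subsystem (-a allow any host, -s serial number, -m max namespaces),
# TCP listener, and a namespace backed by the pre-existing Delay0 bdev.
scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"
scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Delay0

# Initiator side: keep queue-depth-128 random I/O in flight for a few seconds.
build/bin/spdk_nvme_perf -c 0xC \
  -r "trtype:tcp adrfam:IPv4 traddr:$ADDR trsvcid:$PORT" \
  -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 1  # give perf a moment to connect and queue I/O (not in the trace; added for the sketch)

# Delete the subsystem while I/O is queued; the pending commands complete with errors,
# which is the "Read/Write completed with error (sct=0, sc=8)" flood seen above.
scripts/rpc.py nvmf_delete_subsystem "$NQN"

# Wait for perf to exit, polling the same way the test does.
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done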
00:15:27.525 22:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.525 22:57:55 -- common/autotest_common.sh@10 -- # set +x 00:15:27.525 ************************************ 00:15:27.525 END TEST nvmf_delete_subsystem 00:15:27.525 ************************************ 00:15:27.525 22:57:55 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:27.525 22:57:55 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:27.525 22:57:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:27.525 22:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:27.525 22:57:55 -- common/autotest_common.sh@10 -- # set +x 00:15:27.525 ************************************ 00:15:27.525 START TEST nvmf_nvme_cli 00:15:27.525 ************************************ 00:15:27.525 22:57:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:27.786 * Looking for test storage... 00:15:27.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.786 22:57:55 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.786 22:57:55 -- nvmf/common.sh@7 -- # uname -s 00:15:27.786 22:57:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.786 22:57:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.786 22:57:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.786 22:57:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.786 22:57:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.786 22:57:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.786 22:57:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.786 22:57:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.786 22:57:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.786 22:57:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.786 22:57:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:27.786 22:57:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:27.786 22:57:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.786 22:57:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.786 22:57:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.786 22:57:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.786 22:57:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.786 22:57:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.786 22:57:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.786 22:57:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.786 22:57:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.786 22:57:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.787 22:57:55 -- paths/export.sh@5 -- # export PATH 00:15:27.787 22:57:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.787 22:57:55 -- nvmf/common.sh@46 -- # : 0 00:15:27.787 22:57:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:27.787 22:57:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:27.787 22:57:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:27.787 22:57:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.787 22:57:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.787 22:57:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:27.787 22:57:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:27.787 22:57:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:27.787 22:57:55 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:27.787 22:57:55 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:27.787 22:57:55 -- target/nvme_cli.sh@14 -- # devs=() 00:15:27.787 22:57:55 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:27.787 22:57:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:27.787 22:57:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.787 22:57:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:27.787 22:57:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:27.787 22:57:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:27.787 22:57:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.787 22:57:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.787 22:57:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.787 22:57:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:27.787 22:57:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:27.787 22:57:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:27.787 22:57:55 -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 22:58:02 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.968 22:58:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:35.968 22:58:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:35.968 22:58:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:35.968 22:58:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:35.968 22:58:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:35.968 22:58:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:35.968 22:58:02 -- nvmf/common.sh@294 -- # net_devs=() 00:15:35.968 22:58:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:35.968 22:58:02 -- nvmf/common.sh@295 -- # e810=() 00:15:35.968 22:58:02 -- nvmf/common.sh@295 -- # local -ga e810 00:15:35.968 22:58:02 -- nvmf/common.sh@296 -- # x722=() 00:15:35.968 22:58:02 -- nvmf/common.sh@296 -- # local -ga x722 00:15:35.968 22:58:02 -- nvmf/common.sh@297 -- # mlx=() 00:15:35.968 22:58:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:35.968 22:58:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.968 22:58:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:35.968 22:58:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:35.968 22:58:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:35.968 22:58:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.968 22:58:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:35.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:35.968 22:58:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:35.968 22:58:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:35.968 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:35.968 22:58:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.968 22:58:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
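Once the two E810 ports (cvl_0_0 / cvl_0_1) have been identified above, the nvmf_tcp_init plumbing traced over the next lines amounts to isolating the target-side port in a network namespace and addressing both ends. Roughly, as a condensed sketch rather than the common.sh code itself, run as root, with the interface names, addresses and firewall rule taken from this run:

#!/usr/bin/env bash
# Sketch of the TCP test-network setup traced below (names/addresses from this run).
set -e

# Start from clean interfaces.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator keeps cvl_0_1 at 10.0.0.1; the namespaced target port gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring everything up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP port 4420 (the NVMe/TCP listener port) and sanity-check connectivity both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1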
00:15:35.969 22:58:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:35.969 22:58:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.969 22:58:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.969 22:58:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.969 22:58:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.969 22:58:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:35.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:35.969 22:58:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.969 22:58:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:35.969 22:58:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.969 22:58:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:35.969 22:58:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.969 22:58:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:35.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:35.969 22:58:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.969 22:58:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:35.969 22:58:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:35.969 22:58:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:35.969 22:58:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.969 22:58:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.969 22:58:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.969 22:58:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:35.969 22:58:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.969 22:58:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.969 22:58:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:35.969 22:58:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.969 22:58:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.969 22:58:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:35.969 22:58:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:35.969 22:58:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.969 22:58:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.969 22:58:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.969 22:58:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.969 22:58:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:35.969 22:58:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.969 22:58:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.969 22:58:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.969 22:58:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:35.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:15:35.969 00:15:35.969 --- 10.0.0.2 ping statistics --- 00:15:35.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.969 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:15:35.969 22:58:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:15:35.969 00:15:35.969 --- 10.0.0.1 ping statistics --- 00:15:35.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.969 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:15:35.969 22:58:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.969 22:58:02 -- nvmf/common.sh@410 -- # return 0 00:15:35.969 22:58:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:35.969 22:58:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.969 22:58:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:35.969 22:58:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.969 22:58:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:35.969 22:58:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:35.969 22:58:02 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:35.969 22:58:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:35.969 22:58:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:35.969 22:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 22:58:02 -- nvmf/common.sh@469 -- # nvmfpid=4041356 00:15:35.969 22:58:02 -- nvmf/common.sh@470 -- # waitforlisten 4041356 00:15:35.969 22:58:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.969 22:58:02 -- common/autotest_common.sh@819 -- # '[' -z 4041356 ']' 00:15:35.969 22:58:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.969 22:58:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:35.969 22:58:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.969 22:58:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:35.969 22:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 [2024-06-09 22:58:03.052157] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:35.969 [2024-06-09 22:58:03.052245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.969 [2024-06-09 22:58:03.124270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.969 [2024-06-09 22:58:03.197825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:35.969 [2024-06-09 22:58:03.197958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.969 [2024-06-09 22:58:03.197969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:35.969 [2024-06-09 22:58:03.197977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.969 [2024-06-09 22:58:03.198099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.969 [2024-06-09 22:58:03.198217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.969 [2024-06-09 22:58:03.198375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.969 [2024-06-09 22:58:03.198376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.969 22:58:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:35.969 22:58:03 -- common/autotest_common.sh@852 -- # return 0 00:15:35.969 22:58:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:35.969 22:58:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 22:58:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.969 22:58:03 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 [2024-06-09 22:58:03.875595] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 Malloc0 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 Malloc1 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:35.969 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.969 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.969 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.969 22:58:03 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.970 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.970 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 [2024-06-09 22:58:03.965436] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:35.970 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.970 22:58:03 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:35.970 22:58:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.970 22:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:35.970 22:58:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.970 22:58:03 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:35.970 00:15:35.970 Discovery Log Number of Records 2, Generation counter 2 00:15:35.970 =====Discovery Log Entry 0====== 00:15:35.970 trtype: tcp 00:15:35.970 adrfam: ipv4 00:15:35.970 subtype: current discovery subsystem 00:15:35.970 treq: not required 00:15:35.970 portid: 0 00:15:35.970 trsvcid: 4420 00:15:35.970 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:35.970 traddr: 10.0.0.2 00:15:35.970 eflags: explicit discovery connections, duplicate discovery information 00:15:35.970 sectype: none 00:15:35.970 =====Discovery Log Entry 1====== 00:15:35.970 trtype: tcp 00:15:35.970 adrfam: ipv4 00:15:35.970 subtype: nvme subsystem 00:15:35.970 treq: not required 00:15:35.970 portid: 0 00:15:35.970 trsvcid: 4420 00:15:35.970 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:35.970 traddr: 10.0.0.2 00:15:35.970 eflags: none 00:15:35.970 sectype: none 00:15:35.970 22:58:04 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:35.970 22:58:04 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:35.970 22:58:04 -- nvmf/common.sh@510 -- # local dev _ 00:15:35.970 22:58:04 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.970 22:58:04 -- nvmf/common.sh@509 -- # nvme list 00:15:35.970 22:58:04 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:35.970 22:58:04 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.970 22:58:04 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:35.970 22:58:04 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:35.970 22:58:04 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:35.970 22:58:04 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.890 22:58:05 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:37.890 22:58:05 -- common/autotest_common.sh@1177 -- # local i=0 00:15:37.890 22:58:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.890 22:58:05 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:37.890 22:58:05 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:37.890 22:58:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:39.810 22:58:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:39.810 22:58:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:39.810 22:58:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.810 22:58:07 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:39.810 22:58:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.810 22:58:07 -- common/autotest_common.sh@1187 -- # return 0 00:15:39.810 22:58:07 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:39.810 22:58:07 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@509 -- # nvme list 00:15:39.810 22:58:07 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:39.810 22:58:07 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:39.810 22:58:07 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:39.810 /dev/nvme0n1 ]] 00:15:39.810 22:58:07 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:39.810 22:58:07 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:39.810 22:58:07 -- nvmf/common.sh@510 -- # local dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:39.810 22:58:07 -- nvmf/common.sh@509 -- # nvme list 00:15:39.810 22:58:07 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:39.810 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:40.072 22:58:07 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:40.072 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:40.072 22:58:07 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:40.072 22:58:07 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:40.072 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:40.072 22:58:07 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:40.072 22:58:07 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:40.072 22:58:07 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:40.072 22:58:07 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:40.072 22:58:07 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.334 22:58:08 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.334 22:58:08 -- common/autotest_common.sh@1198 -- # local i=0 00:15:40.334 22:58:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:40.334 22:58:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.334 22:58:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:40.334 22:58:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.334 22:58:08 -- common/autotest_common.sh@1210 -- # return 0 00:15:40.334 22:58:08 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:40.334 22:58:08 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.334 22:58:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:40.334 22:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:40.334 22:58:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:40.334 22:58:08 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:40.334 22:58:08 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:40.334 22:58:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.334 22:58:08 -- nvmf/common.sh@116 -- # sync 00:15:40.334 22:58:08 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.334 22:58:08 -- nvmf/common.sh@119 -- # set +e 00:15:40.334 22:58:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.334 22:58:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.334 rmmod nvme_tcp 00:15:40.334 rmmod nvme_fabrics 00:15:40.334 rmmod nvme_keyring 00:15:40.334 22:58:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.334 22:58:08 -- nvmf/common.sh@123 -- # set -e 00:15:40.334 22:58:08 -- nvmf/common.sh@124 -- # return 0 00:15:40.334 22:58:08 -- nvmf/common.sh@477 -- # '[' -n 4041356 ']' 00:15:40.334 22:58:08 -- nvmf/common.sh@478 -- # killprocess 4041356 00:15:40.334 22:58:08 -- common/autotest_common.sh@926 -- # '[' -z 4041356 ']' 00:15:40.334 22:58:08 -- common/autotest_common.sh@930 -- # kill -0 4041356 00:15:40.334 22:58:08 -- common/autotest_common.sh@931 -- # uname 00:15:40.334 22:58:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:40.334 22:58:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4041356 00:15:40.334 22:58:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:40.334 22:58:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:40.334 22:58:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4041356' 00:15:40.334 killing process with pid 4041356 00:15:40.334 22:58:08 -- common/autotest_common.sh@945 -- # kill 4041356 00:15:40.334 22:58:08 -- common/autotest_common.sh@950 -- # wait 4041356 00:15:40.596 22:58:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.596 22:58:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.596 22:58:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.596 22:58:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.596 22:58:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.596 22:58:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.596 22:58:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.596 22:58:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.513 22:58:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:42.513 00:15:42.513 real 0m15.016s 00:15:42.513 user 0m23.703s 00:15:42.513 sys 0m5.858s 00:15:42.513 22:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.513 22:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:42.513 ************************************ 00:15:42.513 END TEST nvmf_nvme_cli 00:15:42.513 ************************************ 00:15:42.774 22:58:10 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:15:42.774 22:58:10 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:42.774 22:58:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:42.774 22:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.774 22:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:42.774 ************************************ 00:15:42.774 START TEST nvmf_host_management 00:15:42.774 ************************************ 00:15:42.774 22:58:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:42.774 * Looking for test storage... 
00:15:42.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.774 22:58:10 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.774 22:58:10 -- nvmf/common.sh@7 -- # uname -s 00:15:42.774 22:58:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.774 22:58:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.774 22:58:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.774 22:58:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.774 22:58:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.774 22:58:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.774 22:58:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.774 22:58:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.774 22:58:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.774 22:58:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.774 22:58:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.774 22:58:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:42.774 22:58:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.774 22:58:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.774 22:58:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.774 22:58:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.774 22:58:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.774 22:58:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.774 22:58:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.774 22:58:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.774 22:58:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.774 22:58:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.774 22:58:10 -- paths/export.sh@5 -- # export PATH 00:15:42.774 22:58:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.774 22:58:10 -- nvmf/common.sh@46 -- # : 0 00:15:42.774 22:58:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:42.774 22:58:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:42.774 22:58:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:42.774 22:58:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.774 22:58:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.774 22:58:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:42.774 22:58:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:42.774 22:58:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:42.774 22:58:10 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:42.774 22:58:10 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:42.774 22:58:10 -- target/host_management.sh@104 -- # nvmftestinit 00:15:42.774 22:58:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:42.774 22:58:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.774 22:58:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:42.774 22:58:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:42.774 22:58:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:42.774 22:58:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.774 22:58:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.774 22:58:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.774 22:58:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:42.774 22:58:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:42.774 22:58:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:42.774 22:58:10 -- common/autotest_common.sh@10 -- # set +x 00:15:50.934 22:58:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:50.934 22:58:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:50.934 22:58:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:50.934 22:58:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:50.934 22:58:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:50.934 22:58:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:50.934 22:58:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:50.934 22:58:17 -- nvmf/common.sh@294 -- # net_devs=() 00:15:50.934 22:58:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:50.934 
22:58:17 -- nvmf/common.sh@295 -- # e810=() 00:15:50.934 22:58:17 -- nvmf/common.sh@295 -- # local -ga e810 00:15:50.934 22:58:17 -- nvmf/common.sh@296 -- # x722=() 00:15:50.934 22:58:17 -- nvmf/common.sh@296 -- # local -ga x722 00:15:50.934 22:58:17 -- nvmf/common.sh@297 -- # mlx=() 00:15:50.934 22:58:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:50.934 22:58:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.934 22:58:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:50.934 22:58:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:50.934 22:58:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:50.934 22:58:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.934 22:58:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:50.934 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:50.934 22:58:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:50.934 22:58:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:50.934 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:50.934 22:58:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:50.934 22:58:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:50.934 22:58:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.934 22:58:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.934 22:58:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.934 22:58:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.934 22:58:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:15:50.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:50.934 22:58:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.935 22:58:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:50.935 22:58:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.935 22:58:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:50.935 22:58:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.935 22:58:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:50.935 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:50.935 22:58:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.935 22:58:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:50.935 22:58:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:50.935 22:58:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:50.935 22:58:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:50.935 22:58:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:50.935 22:58:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.935 22:58:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.935 22:58:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.935 22:58:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:50.935 22:58:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.935 22:58:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.935 22:58:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:50.935 22:58:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.935 22:58:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.935 22:58:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:50.935 22:58:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:50.935 22:58:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.935 22:58:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.935 22:58:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.935 22:58:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.935 22:58:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:50.935 22:58:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.935 22:58:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.935 22:58:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.935 22:58:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:50.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:15:50.935 00:15:50.935 --- 10.0.0.2 ping statistics --- 00:15:50.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.935 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:15:50.935 22:58:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:15:50.935 00:15:50.935 --- 10.0.0.1 ping statistics --- 00:15:50.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.935 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:15:50.935 22:58:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.935 22:58:18 -- nvmf/common.sh@410 -- # return 0 00:15:50.935 22:58:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:50.935 22:58:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.935 22:58:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:50.935 22:58:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:50.935 22:58:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.935 22:58:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:50.935 22:58:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:50.935 22:58:18 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:15:50.935 22:58:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:50.935 22:58:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 ************************************ 00:15:50.935 START TEST nvmf_host_management 00:15:50.935 ************************************ 00:15:50.935 22:58:18 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:15:50.935 22:58:18 -- target/host_management.sh@69 -- # starttarget 00:15:50.935 22:58:18 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:50.935 22:58:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:50.935 22:58:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 22:58:18 -- nvmf/common.sh@469 -- # nvmfpid=4046519 00:15:50.935 22:58:18 -- nvmf/common.sh@470 -- # waitforlisten 4046519 00:15:50.935 22:58:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:50.935 22:58:18 -- common/autotest_common.sh@819 -- # '[' -z 4046519 ']' 00:15:50.935 22:58:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.935 22:58:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.935 22:58:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.935 22:58:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 [2024-06-09 22:58:18.103355] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
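To make the TCP transport testable on a single box, nvmf_tcp_init (traced above) splits the two E810 ports across network namespaces: cvl_0_0 becomes the target-side interface inside the cvl_0_0_ns_spdk namespace at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and port 4420 is opened for NVMe/TCP. Condensed from the commands in the trace (an illustration of the topology, not the function itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    # the target app is then started inside the namespace, as above:
    # ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E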
00:15:50.935 [2024-06-09 22:58:18.103436] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.935 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.935 [2024-06-09 22:58:18.174462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.935 [2024-06-09 22:58:18.248888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:50.935 [2024-06-09 22:58:18.249023] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.935 [2024-06-09 22:58:18.249034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.935 [2024-06-09 22:58:18.249042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.935 [2024-06-09 22:58:18.249166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.935 [2024-06-09 22:58:18.249326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.935 [2024-06-09 22:58:18.249466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.935 [2024-06-09 22:58:18.249466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:50.935 22:58:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:50.935 22:58:18 -- common/autotest_common.sh@852 -- # return 0 00:15:50.935 22:58:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:50.935 22:58:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 22:58:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.935 22:58:18 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.935 22:58:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 [2024-06-09 22:58:18.922608] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.935 22:58:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.935 22:58:18 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:50.935 22:58:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 22:58:18 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:50.935 22:58:18 -- target/host_management.sh@23 -- # cat 00:15:50.935 22:58:18 -- target/host_management.sh@30 -- # rpc_cmd 00:15:50.935 22:58:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 Malloc0 00:15:50.935 [2024-06-09 22:58:18.981926] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.935 22:58:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.935 22:58:18 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:50.935 22:58:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:50.935 22:58:18 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 22:58:19 -- target/host_management.sh@73 -- # perfpid=4046894 00:15:50.935 22:58:19 -- target/host_management.sh@74 -- # 
waitforlisten 4046894 /var/tmp/bdevperf.sock 00:15:50.935 22:58:19 -- common/autotest_common.sh@819 -- # '[' -z 4046894 ']' 00:15:50.935 22:58:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.935 22:58:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:50.935 22:58:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.935 22:58:19 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:50.935 22:58:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:50.935 22:58:19 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:50.935 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:50.935 22:58:19 -- nvmf/common.sh@520 -- # config=() 00:15:50.935 22:58:19 -- nvmf/common.sh@520 -- # local subsystem config 00:15:50.936 22:58:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:50.936 22:58:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:50.936 { 00:15:50.936 "params": { 00:15:50.936 "name": "Nvme$subsystem", 00:15:50.936 "trtype": "$TEST_TRANSPORT", 00:15:50.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.936 "adrfam": "ipv4", 00:15:50.936 "trsvcid": "$NVMF_PORT", 00:15:50.936 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.936 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.936 "hdgst": ${hdgst:-false}, 00:15:50.936 "ddgst": ${ddgst:-false} 00:15:50.936 }, 00:15:50.936 "method": "bdev_nvme_attach_controller" 00:15:50.936 } 00:15:50.936 EOF 00:15:50.936 )") 00:15:50.936 22:58:19 -- nvmf/common.sh@542 -- # cat 00:15:50.936 22:58:19 -- nvmf/common.sh@544 -- # jq . 00:15:50.936 22:58:19 -- nvmf/common.sh@545 -- # IFS=, 00:15:50.936 22:58:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:50.936 "params": { 00:15:50.936 "name": "Nvme0", 00:15:50.936 "trtype": "tcp", 00:15:50.936 "traddr": "10.0.0.2", 00:15:50.936 "adrfam": "ipv4", 00:15:50.936 "trsvcid": "4420", 00:15:50.936 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:50.936 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:50.936 "hdgst": false, 00:15:50.936 "ddgst": false 00:15:50.936 }, 00:15:50.936 "method": "bdev_nvme_attach_controller" 00:15:50.936 }' 00:15:50.936 [2024-06-09 22:58:19.084109] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:50.936 [2024-06-09 22:58:19.084180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046894 ] 00:15:50.936 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.197 [2024-06-09 22:58:19.143601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.197 [2024-06-09 22:58:19.207162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.458 Running I/O for 10 seconds... 
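The --json /dev/fd/63 argument above is a process substitution fed by gen_nvmf_target_json, whose bdev_nvme_attach_controller parameters are printed verbatim in the trace. Written out to a file, an equivalent standalone run would look roughly like the sketch below; the subsystems/bdev/config wrapper is SPDK's standard JSON config layout, but treat the exact file shape emitted by the helper as an assumption.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    # 64 queue depth, 64 KiB I/Os, verify workload, 10 seconds -- same flags as the trace
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10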
00:15:51.720 22:58:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:51.720 22:58:19 -- common/autotest_common.sh@852 -- # return 0 00:15:51.720 22:58:19 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:51.720 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.720 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:51.720 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.720 22:58:19 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.720 22:58:19 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:51.720 22:58:19 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:51.720 22:58:19 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:51.720 22:58:19 -- target/host_management.sh@52 -- # local ret=1 00:15:51.720 22:58:19 -- target/host_management.sh@53 -- # local i 00:15:51.720 22:58:19 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:51.720 22:58:19 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:51.720 22:58:19 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:51.720 22:58:19 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:51.720 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.720 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:51.720 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.984 22:58:19 -- target/host_management.sh@55 -- # read_io_count=843 00:15:51.984 22:58:19 -- target/host_management.sh@58 -- # '[' 843 -ge 100 ']' 00:15:51.984 22:58:19 -- target/host_management.sh@59 -- # ret=0 00:15:51.984 22:58:19 -- target/host_management.sh@60 -- # break 00:15:51.984 22:58:19 -- target/host_management.sh@64 -- # return 0 00:15:51.984 22:58:19 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:51.984 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.984 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:51.984 [2024-06-09 22:58:19.921074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the 
state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.984 [2024-06-09 22:58:19.921205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 
22:58:19.921458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 [2024-06-09 22:58:19.921531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1944530 is same with the state(5) to be set 00:15:51.985 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.985 [2024-06-09 22:58:19.926179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.985 [2024-06-09 22:58:19.926433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.985 [2024-06-09 22:58:19.926442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 22:58:19 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:15:51.986 [2024-06-09 22:58:19.926458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 22:58:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.986 [2024-06-09 22:58:19.926747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926793] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.926968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 22:58:19 -- common/autotest_common.sh@10 -- # set +x 00:15:51.986 [2024-06-09 22:58:19.926984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.926998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.927007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.927014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.927023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.927031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.986 [2024-06-09 22:58:19.927040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.986 [2024-06-09 22:58:19.927048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927129] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:51.987 [2024-06-09 22:58:19.927295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.987 [2024-06-09 22:58:19.927346] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15cf370 was disconnected and freed. reset controller. 00:15:51.987 [2024-06-09 22:58:19.928541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:51.987 task offset: 122880 on job bdev=Nvme0n1 fails 00:15:51.987 00:15:51.987 Latency(us) 00:15:51.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.987 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:51.987 Job: Nvme0n1 ended in about 0.46 seconds with error 00:15:51.987 Verification LBA range: start 0x0 length 0x400 00:15:51.987 Nvme0n1 : 0.46 2052.94 128.31 139.33 0.00 28761.75 1542.83 48496.64 00:15:51.987 =================================================================================================================== 00:15:51.987 Total : 2052.94 128.31 139.33 0.00 28761.75 1542.83 48496.64 00:15:51.987 [2024-06-09 22:58:19.930526] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:51.987 [2024-06-09 22:58:19.930549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d19c0 (9): Bad file descriptor 00:15:51.987 22:58:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.987 22:58:19 -- target/host_management.sh@87 -- # sleep 1 00:15:51.987 [2024-06-09 22:58:19.940279] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:52.933 22:58:20 -- target/host_management.sh@91 -- # kill -9 4046894 00:15:52.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4046894) - No such process 00:15:52.933 22:58:20 -- target/host_management.sh@91 -- # true 00:15:52.933 22:58:20 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:52.933 22:58:20 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:52.933 22:58:20 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:52.933 22:58:20 -- nvmf/common.sh@520 -- # config=() 00:15:52.933 22:58:20 -- nvmf/common.sh@520 -- # local subsystem config 00:15:52.933 22:58:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:52.933 22:58:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:52.933 { 00:15:52.933 "params": { 00:15:52.933 "name": "Nvme$subsystem", 00:15:52.933 "trtype": "$TEST_TRANSPORT", 00:15:52.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:52.933 "adrfam": "ipv4", 00:15:52.933 "trsvcid": "$NVMF_PORT", 00:15:52.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:52.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:52.933 "hdgst": ${hdgst:-false}, 00:15:52.933 "ddgst": ${ddgst:-false} 00:15:52.933 }, 00:15:52.933 "method": "bdev_nvme_attach_controller" 00:15:52.933 } 00:15:52.933 EOF 00:15:52.933 )") 00:15:52.933 22:58:20 -- nvmf/common.sh@542 -- # cat 00:15:52.933 22:58:20 -- nvmf/common.sh@544 -- # jq . 
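What the trace above exercised: with 64 I/Os in flight from bdevperf, the test removed the host from the subsystem's allowed list, which aborts the queued commands ("ABORTED - SQ DELETION") and tears down the qpair; re-adding the host lets bdev_nvme's reset path reconnect ("Resetting controller successful."), and the 1-second bdevperf run being set up here verifies that I/O flows again. The same toggle can be reproduced by hand with SPDK's RPC client, using the NQNs from the log:

    # against the running target's default RPC socket (/var/tmp/spdk.sock)
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # in-flight initiator I/O is aborted and the qpair is disconnected
    ./scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # the initiator's controller reset now succeeds and I/O resumes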
00:15:52.933 22:58:20 -- nvmf/common.sh@545 -- # IFS=, 00:15:52.933 22:58:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:52.933 "params": { 00:15:52.933 "name": "Nvme0", 00:15:52.933 "trtype": "tcp", 00:15:52.933 "traddr": "10.0.0.2", 00:15:52.933 "adrfam": "ipv4", 00:15:52.933 "trsvcid": "4420", 00:15:52.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:52.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:52.933 "hdgst": false, 00:15:52.933 "ddgst": false 00:15:52.933 }, 00:15:52.933 "method": "bdev_nvme_attach_controller" 00:15:52.934 }' 00:15:52.934 [2024-06-09 22:58:20.990474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:52.934 [2024-06-09 22:58:20.990531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047251 ] 00:15:52.934 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.934 [2024-06-09 22:58:21.049181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.934 [2024-06-09 22:58:21.110947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.195 Running I/O for 1 seconds... 00:15:54.140 00:15:54.140 Latency(us) 00:15:54.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.140 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:54.140 Verification LBA range: start 0x0 length 0x400 00:15:54.141 Nvme0n1 : 1.02 2246.84 140.43 0.00 0.00 28098.55 5242.88 42379.95 00:15:54.141 =================================================================================================================== 00:15:54.141 Total : 2246.84 140.43 0.00 0.00 28098.55 5242.88 42379.95 00:15:54.410 22:58:22 -- target/host_management.sh@101 -- # stoptarget 00:15:54.410 22:58:22 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:54.410 22:58:22 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:54.410 22:58:22 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:54.410 22:58:22 -- target/host_management.sh@40 -- # nvmftestfini 00:15:54.410 22:58:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:54.410 22:58:22 -- nvmf/common.sh@116 -- # sync 00:15:54.410 22:58:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:54.410 22:58:22 -- nvmf/common.sh@119 -- # set +e 00:15:54.410 22:58:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:54.410 22:58:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:54.410 rmmod nvme_tcp 00:15:54.410 rmmod nvme_fabrics 00:15:54.410 rmmod nvme_keyring 00:15:54.410 22:58:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:54.410 22:58:22 -- nvmf/common.sh@123 -- # set -e 00:15:54.410 22:58:22 -- nvmf/common.sh@124 -- # return 0 00:15:54.410 22:58:22 -- nvmf/common.sh@477 -- # '[' -n 4046519 ']' 00:15:54.410 22:58:22 -- nvmf/common.sh@478 -- # killprocess 4046519 00:15:54.410 22:58:22 -- common/autotest_common.sh@926 -- # '[' -z 4046519 ']' 00:15:54.410 22:58:22 -- common/autotest_common.sh@930 -- # kill -0 4046519 00:15:54.410 22:58:22 -- common/autotest_common.sh@931 -- # uname 00:15:54.410 22:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:54.410 22:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4046519 00:15:54.410 22:58:22 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:54.410 22:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:54.410 22:58:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4046519' 00:15:54.410 killing process with pid 4046519 00:15:54.410 22:58:22 -- common/autotest_common.sh@945 -- # kill 4046519 00:15:54.410 22:58:22 -- common/autotest_common.sh@950 -- # wait 4046519 00:15:54.671 [2024-06-09 22:58:22.693944] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:54.671 22:58:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:54.671 22:58:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:54.671 22:58:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:54.671 22:58:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:54.671 22:58:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:54.671 22:58:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.671 22:58:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.671 22:58:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.218 22:58:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:57.218 00:15:57.218 real 0m6.748s 00:15:57.218 user 0m20.219s 00:15:57.218 sys 0m1.018s 00:15:57.218 22:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.218 22:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:57.218 ************************************ 00:15:57.218 END TEST nvmf_host_management 00:15:57.218 ************************************ 00:15:57.218 22:58:24 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:15:57.218 00:15:57.218 real 0m14.122s 00:15:57.218 user 0m22.263s 00:15:57.218 sys 0m6.292s 00:15:57.218 22:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.218 22:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:57.218 ************************************ 00:15:57.218 END TEST nvmf_host_management 00:15:57.218 ************************************ 00:15:57.219 22:58:24 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:57.219 22:58:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:57.219 22:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:57.219 22:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 ************************************ 00:15:57.219 START TEST nvmf_lvol 00:15:57.219 ************************************ 00:15:57.219 22:58:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:57.219 * Looking for test storage... 
00:15:57.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.219 22:58:24 -- nvmf/common.sh@7 -- # uname -s 00:15:57.219 22:58:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.219 22:58:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.219 22:58:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.219 22:58:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.219 22:58:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.219 22:58:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.219 22:58:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.219 22:58:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.219 22:58:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.219 22:58:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.219 22:58:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.219 22:58:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:57.219 22:58:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.219 22:58:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.219 22:58:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.219 22:58:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:57.219 22:58:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.219 22:58:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.219 22:58:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.219 22:58:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.219 22:58:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.219 22:58:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.219 22:58:24 -- paths/export.sh@5 -- # export PATH 00:15:57.219 22:58:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.219 22:58:24 -- nvmf/common.sh@46 -- # : 0 00:15:57.219 22:58:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:57.219 22:58:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:57.219 22:58:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:57.219 22:58:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.219 22:58:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.219 22:58:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:57.219 22:58:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:57.219 22:58:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.219 22:58:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:57.219 22:58:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:57.219 22:58:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.219 22:58:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:57.219 22:58:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:57.219 22:58:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:57.219 22:58:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.219 22:58:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.219 22:58:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.219 22:58:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:57.219 22:58:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:57.219 22:58:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:57.219 22:58:25 -- common/autotest_common.sh@10 -- # set +x 00:16:03.850 22:58:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:03.850 22:58:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:03.850 22:58:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:03.850 22:58:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:03.850 22:58:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:03.850 22:58:31 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:03.850 22:58:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:03.850 22:58:31 -- nvmf/common.sh@294 -- # net_devs=() 00:16:03.850 22:58:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:03.850 22:58:31 -- nvmf/common.sh@295 -- # e810=() 00:16:03.850 22:58:31 -- nvmf/common.sh@295 -- # local -ga e810 00:16:03.850 22:58:31 -- nvmf/common.sh@296 -- # x722=() 00:16:03.850 22:58:31 -- nvmf/common.sh@296 -- # local -ga x722 00:16:03.850 22:58:31 -- nvmf/common.sh@297 -- # mlx=() 00:16:03.850 22:58:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:03.850 22:58:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.850 22:58:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:03.850 22:58:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:03.850 22:58:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:03.850 22:58:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.850 22:58:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:03.850 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:03.850 22:58:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:03.850 22:58:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:03.850 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:03.850 22:58:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:03.850 22:58:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:03.850 22:58:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.850 22:58:31 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.850 22:58:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.850 22:58:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.850 22:58:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:03.850 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:03.850 22:58:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.850 22:58:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:03.850 22:58:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.851 22:58:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:03.851 22:58:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.851 22:58:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:03.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:03.851 22:58:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.851 22:58:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:03.851 22:58:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:03.851 22:58:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:03.851 22:58:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:03.851 22:58:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:03.851 22:58:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.851 22:58:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.851 22:58:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.851 22:58:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:03.851 22:58:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.851 22:58:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.851 22:58:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:03.851 22:58:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.851 22:58:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.851 22:58:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:03.851 22:58:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:03.851 22:58:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.851 22:58:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.851 22:58:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.851 22:58:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.851 22:58:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:03.851 22:58:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.851 22:58:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.851 22:58:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.851 22:58:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:03.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:16:03.851 00:16:03.851 --- 10.0.0.2 ping statistics --- 00:16:03.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.851 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:16:03.851 22:58:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:16:03.851 00:16:03.851 --- 10.0.0.1 ping statistics --- 00:16:03.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.851 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:16:03.851 22:58:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.851 22:58:31 -- nvmf/common.sh@410 -- # return 0 00:16:03.851 22:58:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:03.851 22:58:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.851 22:58:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:03.851 22:58:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:03.851 22:58:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.851 22:58:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:03.851 22:58:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:03.851 22:58:31 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:03.851 22:58:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:03.851 22:58:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:03.851 22:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:03.851 22:58:31 -- nvmf/common.sh@469 -- # nvmfpid=4051619 00:16:03.851 22:58:31 -- nvmf/common.sh@470 -- # waitforlisten 4051619 00:16:03.851 22:58:31 -- common/autotest_common.sh@819 -- # '[' -z 4051619 ']' 00:16:03.851 22:58:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.851 22:58:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:03.851 22:58:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.851 22:58:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:03.851 22:58:31 -- common/autotest_common.sh@10 -- # set +x 00:16:03.851 22:58:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:03.851 [2024-06-09 22:58:31.599993] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:03.851 [2024-06-09 22:58:31.600051] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.851 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.851 [2024-06-09 22:58:31.669810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:03.851 [2024-06-09 22:58:31.742790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:03.851 [2024-06-09 22:58:31.742911] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.851 [2024-06-09 22:58:31.742920] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.851 [2024-06-09 22:58:31.742927] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:03.851 [2024-06-09 22:58:31.743034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.851 [2024-06-09 22:58:31.743150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.851 [2024-06-09 22:58:31.743153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.439 22:58:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:04.439 22:58:32 -- common/autotest_common.sh@852 -- # return 0 00:16:04.439 22:58:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:04.439 22:58:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:04.439 22:58:32 -- common/autotest_common.sh@10 -- # set +x 00:16:04.439 22:58:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.439 22:58:32 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:04.439 [2024-06-09 22:58:32.547537] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.439 22:58:32 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.700 22:58:32 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:04.700 22:58:32 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.961 22:58:32 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:04.961 22:58:32 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:04.961 22:58:33 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:05.221 22:58:33 -- target/nvmf_lvol.sh@29 -- # lvs=27afc92a-3d0e-4e46-a5a0-5cfcce222de7 00:16:05.221 22:58:33 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27afc92a-3d0e-4e46-a5a0-5cfcce222de7 lvol 20 00:16:05.481 22:58:33 -- target/nvmf_lvol.sh@32 -- # lvol=a91eb543-aec5-4a53-b14e-04f721875424 00:16:05.481 22:58:33 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:05.481 22:58:33 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a91eb543-aec5-4a53-b14e-04f721875424 00:16:05.741 22:58:33 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:06.002 [2024-06-09 22:58:33.926786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.002 22:58:33 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:06.002 22:58:34 -- target/nvmf_lvol.sh@42 -- # perf_pid=4052164 00:16:06.002 22:58:34 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:06.002 22:58:34 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:06.002 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.945 
22:58:35 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a91eb543-aec5-4a53-b14e-04f721875424 MY_SNAPSHOT 00:16:07.205 22:58:35 -- target/nvmf_lvol.sh@47 -- # snapshot=8c50bd92-b5b7-4843-8c35-f3f4573e0337 00:16:07.205 22:58:35 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a91eb543-aec5-4a53-b14e-04f721875424 30 00:16:07.466 22:58:35 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8c50bd92-b5b7-4843-8c35-f3f4573e0337 MY_CLONE 00:16:07.727 22:58:35 -- target/nvmf_lvol.sh@49 -- # clone=f0424d95-de46-4dc0-b458-a0eb1d0adef6 00:16:07.727 22:58:35 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f0424d95-de46-4dc0-b458-a0eb1d0adef6 00:16:07.988 22:58:35 -- target/nvmf_lvol.sh@53 -- # wait 4052164 00:16:17.993 Initializing NVMe Controllers 00:16:17.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:17.993 Controller IO queue size 128, less than required. 00:16:17.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:17.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:17.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:17.993 Initialization complete. Launching workers. 00:16:17.993 ======================================================== 00:16:17.993 Latency(us) 00:16:17.993 Device Information : IOPS MiB/s Average min max 00:16:17.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12384.00 48.38 10339.30 1600.10 48910.75 00:16:17.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12523.90 48.92 10224.83 3128.12 59195.38 00:16:17.993 ======================================================== 00:16:17.993 Total : 24907.90 97.30 10281.74 1600.10 59195.38 00:16:17.993 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a91eb543-aec5-4a53-b14e-04f721875424 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27afc92a-3d0e-4e46-a5a0-5cfcce222de7 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:17.994 22:58:44 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:17.994 22:58:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:17.994 22:58:44 -- nvmf/common.sh@116 -- # sync 00:16:17.994 22:58:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:17.994 22:58:44 -- nvmf/common.sh@119 -- # set +e 00:16:17.994 22:58:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:17.994 22:58:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:17.994 rmmod nvme_tcp 00:16:17.994 rmmod nvme_fabrics 00:16:17.994 rmmod nvme_keyring 00:16:17.994 22:58:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:17.994 22:58:44 -- nvmf/common.sh@123 -- # set -e 00:16:17.994 22:58:44 -- nvmf/common.sh@124 -- # return 0 00:16:17.994 22:58:44 -- nvmf/common.sh@477 -- # '[' -n 4051619 ']' 
00:16:17.994 22:58:44 -- nvmf/common.sh@478 -- # killprocess 4051619 00:16:17.994 22:58:44 -- common/autotest_common.sh@926 -- # '[' -z 4051619 ']' 00:16:17.994 22:58:44 -- common/autotest_common.sh@930 -- # kill -0 4051619 00:16:17.994 22:58:44 -- common/autotest_common.sh@931 -- # uname 00:16:17.994 22:58:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:17.994 22:58:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4051619 00:16:17.994 22:58:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:17.994 22:58:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:17.994 22:58:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4051619' 00:16:17.994 killing process with pid 4051619 00:16:17.994 22:58:44 -- common/autotest_common.sh@945 -- # kill 4051619 00:16:17.994 22:58:44 -- common/autotest_common.sh@950 -- # wait 4051619 00:16:17.994 22:58:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:17.994 22:58:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:17.994 22:58:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:17.994 22:58:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:17.994 22:58:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:17.994 22:58:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.994 22:58:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.994 22:58:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.382 22:58:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:19.382 00:16:19.382 real 0m22.315s 00:16:19.382 user 1m2.922s 00:16:19.382 sys 0m7.165s 00:16:19.382 22:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.382 22:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:19.382 ************************************ 00:16:19.382 END TEST nvmf_lvol 00:16:19.382 ************************************ 00:16:19.382 22:58:47 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:19.382 22:58:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:19.382 22:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:19.382 22:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:19.382 ************************************ 00:16:19.382 START TEST nvmf_lvs_grow 00:16:19.382 ************************************ 00:16:19.382 22:58:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:19.382 * Looking for test storage... 
00:16:19.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.382 22:58:47 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.382 22:58:47 -- nvmf/common.sh@7 -- # uname -s 00:16:19.382 22:58:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.382 22:58:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.382 22:58:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.382 22:58:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.382 22:58:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.382 22:58:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.382 22:58:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.382 22:58:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.382 22:58:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.382 22:58:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.382 22:58:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.382 22:58:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.382 22:58:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.382 22:58:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.382 22:58:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.382 22:58:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.382 22:58:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.382 22:58:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.382 22:58:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.382 22:58:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.382 22:58:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.382 22:58:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.382 22:58:47 -- paths/export.sh@5 -- # export PATH 00:16:19.383 22:58:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.383 22:58:47 -- nvmf/common.sh@46 -- # : 0 00:16:19.383 22:58:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:19.383 22:58:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:19.383 22:58:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:19.383 22:58:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.383 22:58:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.383 22:58:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:19.383 22:58:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:19.383 22:58:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:19.383 22:58:47 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:19.383 22:58:47 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.383 22:58:47 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:19.383 22:58:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:19.383 22:58:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.383 22:58:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:19.383 22:58:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:19.383 22:58:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:19.383 22:58:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.383 22:58:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.383 22:58:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.383 22:58:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:19.383 22:58:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:19.383 22:58:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:19.383 22:58:47 -- common/autotest_common.sh@10 -- # set +x 00:16:27.535 22:58:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.535 22:58:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:27.535 22:58:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:27.535 22:58:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:27.535 22:58:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:27.535 22:58:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:27.535 22:58:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:27.535 22:58:54 -- nvmf/common.sh@294 -- # net_devs=() 00:16:27.535 22:58:54 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:16:27.535 22:58:54 -- nvmf/common.sh@295 -- # e810=() 00:16:27.535 22:58:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:27.535 22:58:54 -- nvmf/common.sh@296 -- # x722=() 00:16:27.535 22:58:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:27.535 22:58:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:27.535 22:58:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:27.535 22:58:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.535 22:58:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.535 22:58:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:27.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:27.535 22:58:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.535 22:58:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:27.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:27.535 22:58:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.535 22:58:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.535 22:58:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.535 22:58:54 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:27.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:27.535 22:58:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.535 22:58:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.535 22:58:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.535 22:58:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:27.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:27.535 22:58:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:27.535 22:58:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:27.535 22:58:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.535 22:58:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.535 22:58:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:27.535 22:58:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.535 22:58:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.535 22:58:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:27.535 22:58:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.535 22:58:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.535 22:58:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:27.535 22:58:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:27.535 22:58:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.535 22:58:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.535 22:58:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.535 22:58:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.535 22:58:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:27.535 22:58:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.535 22:58:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.535 22:58:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.535 22:58:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:27.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:16:27.535 00:16:27.535 --- 10.0.0.2 ping statistics --- 00:16:27.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.535 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:16:27.535 22:58:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:16:27.535 00:16:27.535 --- 10.0.0.1 ping statistics --- 00:16:27.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.535 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:16:27.535 22:58:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.535 22:58:54 -- nvmf/common.sh@410 -- # return 0 00:16:27.535 22:58:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.535 22:58:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.535 22:58:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.535 22:58:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.535 22:58:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.535 22:58:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.535 22:58:54 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:27.535 22:58:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.535 22:58:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:27.535 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:16:27.535 22:58:54 -- nvmf/common.sh@469 -- # nvmfpid=4058401 00:16:27.535 22:58:54 -- nvmf/common.sh@470 -- # waitforlisten 4058401 00:16:27.535 22:58:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:27.535 22:58:54 -- common/autotest_common.sh@819 -- # '[' -z 4058401 ']' 00:16:27.535 22:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.535 22:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.535 22:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.535 22:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.535 22:58:54 -- common/autotest_common.sh@10 -- # set +x 00:16:27.535 [2024-06-09 22:58:54.653246] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:27.535 [2024-06-09 22:58:54.653310] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.535 EAL: No free 2048 kB hugepages reported on node 1 00:16:27.535 [2024-06-09 22:58:54.722555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.535 [2024-06-09 22:58:54.794984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.535 [2024-06-09 22:58:54.795102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.535 [2024-06-09 22:58:54.795111] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.535 [2024-06-09 22:58:54.795117] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:27.535 [2024-06-09 22:58:54.795136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.535 22:58:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:27.535 22:58:55 -- common/autotest_common.sh@852 -- # return 0 00:16:27.535 22:58:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:27.535 22:58:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:27.535 22:58:55 -- common/autotest_common.sh@10 -- # set +x 00:16:27.535 22:58:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.535 22:58:55 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:27.535 [2024-06-09 22:58:55.594261] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.535 22:58:55 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:16:27.535 22:58:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:27.536 22:58:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:27.536 22:58:55 -- common/autotest_common.sh@10 -- # set +x 00:16:27.536 ************************************ 00:16:27.536 START TEST lvs_grow_clean 00:16:27.536 ************************************ 00:16:27.536 22:58:55 -- common/autotest_common.sh@1104 -- # lvs_grow 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:27.536 22:58:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:27.796 22:58:55 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:27.796 22:58:55 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:27.796 22:58:55 -- target/nvmf_lvs_grow.sh@28 -- # lvs=3dad63e3-503d-4358-9517-c8c9359f473e 00:16:27.796 22:58:55 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:27.796 22:58:55 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:28.057 22:58:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:28.057 22:58:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:28.057 22:58:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3dad63e3-503d-4358-9517-c8c9359f473e lvol 150 00:16:28.318 22:58:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c87cf044-4739-42c7-bf61-27637bfdbd29 00:16:28.318 22:58:56 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:28.318 22:58:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:28.318 [2024-06-09 22:58:56.397439] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:28.318 [2024-06-09 22:58:56.397492] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:28.318 true 00:16:28.318 22:58:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:28.318 22:58:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:28.618 22:58:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:28.618 22:58:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:28.618 22:58:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c87cf044-4739-42c7-bf61-27637bfdbd29 00:16:28.898 22:58:56 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:28.898 [2024-06-09 22:58:56.999333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.898 22:58:57 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:29.158 22:58:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4059092 00:16:29.158 22:58:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:29.158 22:58:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4059092 /var/tmp/bdevperf.sock 00:16:29.158 22:58:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:29.158 22:58:57 -- common/autotest_common.sh@819 -- # '[' -z 4059092 ']' 00:16:29.158 22:58:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:29.158 22:58:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.158 22:58:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:29.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:29.158 22:58:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.158 22:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:29.158 [2024-06-09 22:58:57.220565] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:29.158 [2024-06-09 22:58:57.220616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4059092 ] 00:16:29.158 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.158 [2024-06-09 22:58:57.277782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.419 [2024-06-09 22:58:57.339387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.988 22:58:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.988 22:58:57 -- common/autotest_common.sh@852 -- # return 0 00:16:29.988 22:58:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:30.249 Nvme0n1 00:16:30.249 22:58:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:30.249 [ 00:16:30.249 { 00:16:30.249 "name": "Nvme0n1", 00:16:30.249 "aliases": [ 00:16:30.249 "c87cf044-4739-42c7-bf61-27637bfdbd29" 00:16:30.249 ], 00:16:30.249 "product_name": "NVMe disk", 00:16:30.249 "block_size": 4096, 00:16:30.249 "num_blocks": 38912, 00:16:30.249 "uuid": "c87cf044-4739-42c7-bf61-27637bfdbd29", 00:16:30.249 "assigned_rate_limits": { 00:16:30.249 "rw_ios_per_sec": 0, 00:16:30.249 "rw_mbytes_per_sec": 0, 00:16:30.249 "r_mbytes_per_sec": 0, 00:16:30.249 "w_mbytes_per_sec": 0 00:16:30.249 }, 00:16:30.249 "claimed": false, 00:16:30.249 "zoned": false, 00:16:30.249 "supported_io_types": { 00:16:30.249 "read": true, 00:16:30.249 "write": true, 00:16:30.249 "unmap": true, 00:16:30.249 "write_zeroes": true, 00:16:30.249 "flush": true, 00:16:30.249 "reset": true, 00:16:30.249 "compare": true, 00:16:30.249 "compare_and_write": true, 00:16:30.249 "abort": true, 00:16:30.249 "nvme_admin": true, 00:16:30.249 "nvme_io": true 00:16:30.249 }, 00:16:30.249 "driver_specific": { 00:16:30.249 "nvme": [ 00:16:30.249 { 00:16:30.249 "trid": { 00:16:30.249 "trtype": "TCP", 00:16:30.249 "adrfam": "IPv4", 00:16:30.249 "traddr": "10.0.0.2", 00:16:30.249 "trsvcid": "4420", 00:16:30.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:30.249 }, 00:16:30.249 "ctrlr_data": { 00:16:30.249 "cntlid": 1, 00:16:30.249 "vendor_id": "0x8086", 00:16:30.249 "model_number": "SPDK bdev Controller", 00:16:30.249 "serial_number": "SPDK0", 00:16:30.249 "firmware_revision": "24.01.1", 00:16:30.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:30.249 "oacs": { 00:16:30.249 "security": 0, 00:16:30.249 "format": 0, 00:16:30.249 "firmware": 0, 00:16:30.249 "ns_manage": 0 00:16:30.249 }, 00:16:30.249 "multi_ctrlr": true, 00:16:30.249 "ana_reporting": false 00:16:30.249 }, 00:16:30.249 "vs": { 00:16:30.249 "nvme_version": "1.3" 00:16:30.249 }, 00:16:30.249 "ns_data": { 00:16:30.249 "id": 1, 00:16:30.249 "can_share": true 00:16:30.249 } 00:16:30.249 } 00:16:30.249 ], 00:16:30.249 "mp_policy": "active_passive" 00:16:30.249 } 00:16:30.249 } 00:16:30.249 ] 00:16:30.249 22:58:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4059289 00:16:30.249 22:58:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:30.249 22:58:58 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:30.509 Running I/O 
for 10 seconds... 00:16:31.452 Latency(us) 00:16:31.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:31.453 Nvme0n1 : 1.00 17937.00 70.07 0.00 0.00 0.00 0.00 0.00 00:16:31.453 =================================================================================================================== 00:16:31.453 Total : 17937.00 70.07 0.00 0.00 0.00 0.00 0.00 00:16:31.453 00:16:32.393 22:59:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:32.393 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:32.393 Nvme0n1 : 2.00 18131.50 70.83 0.00 0.00 0.00 0.00 0.00 00:16:32.393 =================================================================================================================== 00:16:32.393 Total : 18131.50 70.83 0.00 0.00 0.00 0.00 0.00 00:16:32.393 00:16:32.393 true 00:16:32.653 22:59:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:32.653 22:59:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:32.653 22:59:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:32.653 22:59:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:32.653 22:59:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 4059289 00:16:33.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:33.595 Nvme0n1 : 3.00 18194.67 71.07 0.00 0.00 0.00 0.00 0.00 00:16:33.595 =================================================================================================================== 00:16:33.595 Total : 18194.67 71.07 0.00 0.00 0.00 0.00 0.00 00:16:33.595 00:16:34.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:34.538 Nvme0n1 : 4.00 18269.75 71.37 0.00 0.00 0.00 0.00 0.00 00:16:34.538 =================================================================================================================== 00:16:34.538 Total : 18269.75 71.37 0.00 0.00 0.00 0.00 0.00 00:16:34.538 00:16:35.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.478 Nvme0n1 : 5.00 18312.40 71.53 0.00 0.00 0.00 0.00 0.00 00:16:35.478 =================================================================================================================== 00:16:35.478 Total : 18312.40 71.53 0.00 0.00 0.00 0.00 0.00 00:16:35.478 00:16:36.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.419 Nvme0n1 : 6.00 18345.17 71.66 0.00 0.00 0.00 0.00 0.00 00:16:36.419 =================================================================================================================== 00:16:36.419 Total : 18345.17 71.66 0.00 0.00 0.00 0.00 0.00 00:16:36.419 00:16:37.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.360 Nvme0n1 : 7.00 18366.71 71.74 0.00 0.00 0.00 0.00 0.00 00:16:37.360 =================================================================================================================== 00:16:37.360 Total : 18366.71 71.74 0.00 0.00 0.00 0.00 0.00 00:16:37.360 00:16:38.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.748 Nvme0n1 : 8.00 18390.75 71.84 0.00 0.00 0.00 0.00 0.00 00:16:38.748 
=================================================================================================================== 00:16:38.748 Total : 18390.75 71.84 0.00 0.00 0.00 0.00 0.00 00:16:38.748 00:16:39.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.692 Nvme0n1 : 9.00 18402.56 71.88 0.00 0.00 0.00 0.00 0.00 00:16:39.692 =================================================================================================================== 00:16:39.692 Total : 18402.56 71.88 0.00 0.00 0.00 0.00 0.00 00:16:39.692 00:16:40.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.636 Nvme0n1 : 10.00 18418.30 71.95 0.00 0.00 0.00 0.00 0.00 00:16:40.636 =================================================================================================================== 00:16:40.636 Total : 18418.30 71.95 0.00 0.00 0.00 0.00 0.00 00:16:40.636 00:16:40.636 00:16:40.636 Latency(us) 00:16:40.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.636 Nvme0n1 : 10.01 18419.27 71.95 0.00 0.00 6945.01 4232.53 21954.56 00:16:40.636 =================================================================================================================== 00:16:40.636 Total : 18419.27 71.95 0.00 0.00 6945.01 4232.53 21954.56 00:16:40.636 0 00:16:40.636 22:59:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4059092 00:16:40.636 22:59:08 -- common/autotest_common.sh@926 -- # '[' -z 4059092 ']' 00:16:40.636 22:59:08 -- common/autotest_common.sh@930 -- # kill -0 4059092 00:16:40.636 22:59:08 -- common/autotest_common.sh@931 -- # uname 00:16:40.636 22:59:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:40.636 22:59:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4059092 00:16:40.636 22:59:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:40.636 22:59:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:40.636 22:59:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4059092' 00:16:40.636 killing process with pid 4059092 00:16:40.636 22:59:08 -- common/autotest_common.sh@945 -- # kill 4059092 00:16:40.636 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.636 00:16:40.636 Latency(us) 00:16:40.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.636 =================================================================================================================== 00:16:40.636 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.636 22:59:08 -- common/autotest_common.sh@950 -- # wait 4059092 00:16:40.636 22:59:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:40.897 22:59:08 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:40.897 22:59:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:40.897 22:59:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:40.897 22:59:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:40.897 22:59:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:41.158 [2024-06-09 22:59:09.203900] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:41.158 22:59:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:41.158 22:59:09 -- common/autotest_common.sh@640 -- # local es=0 00:16:41.158 22:59:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:41.158 22:59:09 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.158 22:59:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:41.158 22:59:09 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.158 22:59:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:41.158 22:59:09 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.158 22:59:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:41.158 22:59:09 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.158 22:59:09 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:41.158 22:59:09 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:41.419 request: 00:16:41.419 { 00:16:41.419 "uuid": "3dad63e3-503d-4358-9517-c8c9359f473e", 00:16:41.419 "method": "bdev_lvol_get_lvstores", 00:16:41.419 "req_id": 1 00:16:41.419 } 00:16:41.419 Got JSON-RPC error response 00:16:41.419 response: 00:16:41.419 { 00:16:41.419 "code": -19, 00:16:41.419 "message": "No such device" 00:16:41.419 } 00:16:41.419 22:59:09 -- common/autotest_common.sh@643 -- # es=1 00:16:41.419 22:59:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:41.419 22:59:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:41.419 22:59:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:41.419 22:59:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:41.419 aio_bdev 00:16:41.419 22:59:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c87cf044-4739-42c7-bf61-27637bfdbd29 00:16:41.419 22:59:09 -- common/autotest_common.sh@887 -- # local bdev_name=c87cf044-4739-42c7-bf61-27637bfdbd29 00:16:41.419 22:59:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:41.419 22:59:09 -- common/autotest_common.sh@889 -- # local i 00:16:41.419 22:59:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:41.419 22:59:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:41.419 22:59:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:41.681 22:59:09 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c87cf044-4739-42c7-bf61-27637bfdbd29 -t 2000 00:16:41.681 [ 00:16:41.681 { 00:16:41.681 "name": "c87cf044-4739-42c7-bf61-27637bfdbd29", 00:16:41.681 "aliases": [ 00:16:41.681 "lvs/lvol" 
00:16:41.681 ], 00:16:41.681 "product_name": "Logical Volume", 00:16:41.681 "block_size": 4096, 00:16:41.681 "num_blocks": 38912, 00:16:41.681 "uuid": "c87cf044-4739-42c7-bf61-27637bfdbd29", 00:16:41.681 "assigned_rate_limits": { 00:16:41.681 "rw_ios_per_sec": 0, 00:16:41.681 "rw_mbytes_per_sec": 0, 00:16:41.681 "r_mbytes_per_sec": 0, 00:16:41.681 "w_mbytes_per_sec": 0 00:16:41.681 }, 00:16:41.681 "claimed": false, 00:16:41.681 "zoned": false, 00:16:41.681 "supported_io_types": { 00:16:41.681 "read": true, 00:16:41.681 "write": true, 00:16:41.681 "unmap": true, 00:16:41.681 "write_zeroes": true, 00:16:41.681 "flush": false, 00:16:41.681 "reset": true, 00:16:41.681 "compare": false, 00:16:41.681 "compare_and_write": false, 00:16:41.681 "abort": false, 00:16:41.681 "nvme_admin": false, 00:16:41.681 "nvme_io": false 00:16:41.681 }, 00:16:41.681 "driver_specific": { 00:16:41.681 "lvol": { 00:16:41.681 "lvol_store_uuid": "3dad63e3-503d-4358-9517-c8c9359f473e", 00:16:41.681 "base_bdev": "aio_bdev", 00:16:41.681 "thin_provision": false, 00:16:41.681 "snapshot": false, 00:16:41.681 "clone": false, 00:16:41.681 "esnap_clone": false 00:16:41.681 } 00:16:41.681 } 00:16:41.681 } 00:16:41.681 ] 00:16:41.681 22:59:09 -- common/autotest_common.sh@895 -- # return 0 00:16:41.681 22:59:09 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:41.681 22:59:09 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:41.942 22:59:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:41.942 22:59:09 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:41.942 22:59:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:42.203 22:59:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:42.203 22:59:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c87cf044-4739-42c7-bf61-27637bfdbd29 00:16:42.203 22:59:10 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3dad63e3-503d-4358-9517-c8c9359f473e 00:16:42.465 22:59:10 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:42.465 22:59:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.465 00:16:42.465 real 0m15.021s 00:16:42.465 user 0m14.728s 00:16:42.465 sys 0m1.252s 00:16:42.465 22:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.465 22:59:10 -- common/autotest_common.sh@10 -- # set +x 00:16:42.465 ************************************ 00:16:42.465 END TEST lvs_grow_clean 00:16:42.465 ************************************ 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:42.726 22:59:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:42.726 22:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:42.726 22:59:10 -- common/autotest_common.sh@10 -- # set +x 00:16:42.726 ************************************ 00:16:42.726 START TEST lvs_grow_dirty 00:16:42.726 ************************************ 00:16:42.726 22:59:10 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:16:42.726 
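Both lvs_grow variants, the clean one that just finished and the dirty one starting below, drive lvstore growth entirely through rpc.py: a file-backed AIO bdev hosts the lvstore, the backing file is enlarged, the AIO bdev is rescanned, and the lvstore is grown to claim the new clusters. A minimal sketch of that sequence, run from the SPDK repo root against a local target (the file path and the aio_bdev/lvs names here are placeholders, not the job's paths):

  truncate -s 200M /tmp/aio_file                                    # 200M backing file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096        # expose it as an AIO bdev
  lvs_uuid=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  truncate -s 400M /tmp/aio_file                                    # grow the backing file
  scripts/rpc.py bdev_aio_rescan aio_bdev                           # AIO bdev picks up the new size
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs_uuid"              # lvstore claims the new clusters
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters'   # 49 -> 99 in this run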
22:59:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:42.726 22:59:10 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:42.987 22:59:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:42.987 22:59:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:42.987 22:59:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea lvol 150 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:43.249 22:59:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:43.510 [2024-06-09 22:59:11.460890] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:43.510 [2024-06-09 22:59:11.460950] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:43.510 true 00:16:43.510 22:59:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:43.510 22:59:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:43.510 22:59:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:43.510 22:59:11 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:43.771 22:59:11 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:43.771 22:59:11 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:44.031 22:59:12 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.300 22:59:12 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:44.300 22:59:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4062058 00:16:44.300 22:59:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.300 22:59:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4062058 /var/tmp/bdevperf.sock 00:16:44.300 22:59:12 -- common/autotest_common.sh@819 -- # '[' -z 4062058 ']' 00:16:44.300 22:59:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:44.300 22:59:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:44.300 22:59:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:44.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:44.300 22:59:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:44.300 22:59:12 -- common/autotest_common.sh@10 -- # set +x 00:16:44.300 [2024-06-09 22:59:12.235291] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:44.300 [2024-06-09 22:59:12.235339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4062058 ] 00:16:44.300 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.300 [2024-06-09 22:59:12.292701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.300 [2024-06-09 22:59:12.354834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.874 22:59:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:44.874 22:59:13 -- common/autotest_common.sh@852 -- # return 0 00:16:44.874 22:59:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:45.134 Nvme0n1 00:16:45.134 22:59:13 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:45.395 [ 00:16:45.395 { 00:16:45.395 "name": "Nvme0n1", 00:16:45.395 "aliases": [ 00:16:45.395 "6de6a2fb-06c0-4efb-9811-2a16bc6b615f" 00:16:45.395 ], 00:16:45.395 "product_name": "NVMe disk", 00:16:45.395 "block_size": 4096, 00:16:45.395 "num_blocks": 38912, 00:16:45.395 "uuid": "6de6a2fb-06c0-4efb-9811-2a16bc6b615f", 00:16:45.395 "assigned_rate_limits": { 00:16:45.395 "rw_ios_per_sec": 0, 00:16:45.395 "rw_mbytes_per_sec": 0, 00:16:45.395 "r_mbytes_per_sec": 0, 00:16:45.395 "w_mbytes_per_sec": 0 00:16:45.395 }, 00:16:45.395 "claimed": false, 00:16:45.395 "zoned": false, 00:16:45.395 "supported_io_types": { 00:16:45.395 "read": true, 00:16:45.395 "write": true, 
00:16:45.395 "unmap": true, 00:16:45.395 "write_zeroes": true, 00:16:45.395 "flush": true, 00:16:45.395 "reset": true, 00:16:45.395 "compare": true, 00:16:45.395 "compare_and_write": true, 00:16:45.395 "abort": true, 00:16:45.395 "nvme_admin": true, 00:16:45.395 "nvme_io": true 00:16:45.395 }, 00:16:45.395 "driver_specific": { 00:16:45.395 "nvme": [ 00:16:45.395 { 00:16:45.395 "trid": { 00:16:45.395 "trtype": "TCP", 00:16:45.395 "adrfam": "IPv4", 00:16:45.395 "traddr": "10.0.0.2", 00:16:45.395 "trsvcid": "4420", 00:16:45.395 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:45.395 }, 00:16:45.395 "ctrlr_data": { 00:16:45.395 "cntlid": 1, 00:16:45.395 "vendor_id": "0x8086", 00:16:45.395 "model_number": "SPDK bdev Controller", 00:16:45.395 "serial_number": "SPDK0", 00:16:45.395 "firmware_revision": "24.01.1", 00:16:45.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:45.395 "oacs": { 00:16:45.395 "security": 0, 00:16:45.395 "format": 0, 00:16:45.395 "firmware": 0, 00:16:45.395 "ns_manage": 0 00:16:45.395 }, 00:16:45.395 "multi_ctrlr": true, 00:16:45.395 "ana_reporting": false 00:16:45.395 }, 00:16:45.395 "vs": { 00:16:45.395 "nvme_version": "1.3" 00:16:45.395 }, 00:16:45.395 "ns_data": { 00:16:45.395 "id": 1, 00:16:45.395 "can_share": true 00:16:45.395 } 00:16:45.395 } 00:16:45.395 ], 00:16:45.395 "mp_policy": "active_passive" 00:16:45.395 } 00:16:45.395 } 00:16:45.395 ] 00:16:45.395 22:59:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4062217 00:16:45.395 22:59:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:45.395 22:59:13 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:45.395 Running I/O for 10 seconds... 00:16:46.336 Latency(us) 00:16:46.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.336 Nvme0n1 : 1.00 17927.00 70.03 0.00 0.00 0.00 0.00 0.00 00:16:46.336 =================================================================================================================== 00:16:46.336 Total : 17927.00 70.03 0.00 0.00 0.00 0.00 0.00 00:16:46.336 00:16:47.279 22:59:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:47.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.540 Nvme0n1 : 2.00 18051.50 70.51 0.00 0.00 0.00 0.00 0.00 00:16:47.540 =================================================================================================================== 00:16:47.540 Total : 18051.50 70.51 0.00 0.00 0.00 0.00 0.00 00:16:47.540 00:16:47.540 true 00:16:47.540 22:59:15 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:47.540 22:59:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:47.540 22:59:15 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:47.541 22:59:15 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:47.541 22:59:15 -- target/nvmf_lvs_grow.sh@65 -- # wait 4062217 00:16:48.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.485 Nvme0n1 : 3.00 18101.00 70.71 0.00 0.00 0.00 0.00 0.00 00:16:48.485 
=================================================================================================================== 00:16:48.485 Total : 18101.00 70.71 0.00 0.00 0.00 0.00 0.00 00:16:48.485 00:16:49.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.428 Nvme0n1 : 4.00 18141.75 70.87 0.00 0.00 0.00 0.00 0.00 00:16:49.429 =================================================================================================================== 00:16:49.429 Total : 18141.75 70.87 0.00 0.00 0.00 0.00 0.00 00:16:49.429 00:16:50.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.373 Nvme0n1 : 5.00 18172.60 70.99 0.00 0.00 0.00 0.00 0.00 00:16:50.373 =================================================================================================================== 00:16:50.373 Total : 18172.60 70.99 0.00 0.00 0.00 0.00 0.00 00:16:50.373 00:16:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:51.759 Nvme0n1 : 6.00 18193.17 71.07 0.00 0.00 0.00 0.00 0.00 00:16:51.759 =================================================================================================================== 00:16:51.759 Total : 18193.17 71.07 0.00 0.00 0.00 0.00 0.00 00:16:51.759 00:16:52.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.702 Nvme0n1 : 7.00 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:16:52.702 =================================================================================================================== 00:16:52.702 Total : 18209.00 71.13 0.00 0.00 0.00 0.00 0.00 00:16:52.702 00:16:53.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:53.644 Nvme0n1 : 8.00 18222.88 71.18 0.00 0.00 0.00 0.00 0.00 00:16:53.644 =================================================================================================================== 00:16:53.644 Total : 18222.88 71.18 0.00 0.00 0.00 0.00 0.00 00:16:53.644 00:16:54.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.587 Nvme0n1 : 9.00 18231.89 71.22 0.00 0.00 0.00 0.00 0.00 00:16:54.587 =================================================================================================================== 00:16:54.587 Total : 18231.89 71.22 0.00 0.00 0.00 0.00 0.00 00:16:54.587 00:16:55.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.528 Nvme0n1 : 10.00 18239.90 71.25 0.00 0.00 0.00 0.00 0.00 00:16:55.528 =================================================================================================================== 00:16:55.528 Total : 18239.90 71.25 0.00 0.00 0.00 0.00 0.00 00:16:55.528 00:16:55.528 00:16:55.528 Latency(us) 00:16:55.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.528 Nvme0n1 : 10.01 18239.92 71.25 0.00 0.00 7012.90 2321.07 15728.64 00:16:55.528 =================================================================================================================== 00:16:55.528 Total : 18239.92 71.25 0.00 0.00 7012.90 2321.07 15728.64 00:16:55.528 0 00:16:55.528 22:59:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4062058 00:16:55.528 22:59:23 -- common/autotest_common.sh@926 -- # '[' -z 4062058 ']' 00:16:55.528 22:59:23 -- common/autotest_common.sh@930 -- # kill -0 4062058 00:16:55.528 22:59:23 -- common/autotest_common.sh@931 -- # uname 00:16:55.528 22:59:23 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.528 22:59:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4062058 00:16:55.528 22:59:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:55.528 22:59:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:55.528 22:59:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4062058' 00:16:55.528 killing process with pid 4062058 00:16:55.528 22:59:23 -- common/autotest_common.sh@945 -- # kill 4062058 00:16:55.528 Received shutdown signal, test time was about 10.000000 seconds 00:16:55.528 00:16:55.528 Latency(us) 00:16:55.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.528 =================================================================================================================== 00:16:55.528 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.528 22:59:23 -- common/autotest_common.sh@950 -- # wait 4062058 00:16:55.788 22:59:23 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:55.788 22:59:23 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:55.788 22:59:23 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 4058401 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 4058401 00:16:56.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 4058401 Killed "${NVMF_APP[@]}" "$@" 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:56.049 22:59:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:56.049 22:59:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.049 22:59:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:56.049 22:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:56.049 22:59:24 -- nvmf/common.sh@469 -- # nvmfpid=4064378 00:16:56.049 22:59:24 -- nvmf/common.sh@470 -- # waitforlisten 4064378 00:16:56.049 22:59:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:56.049 22:59:24 -- common/autotest_common.sh@819 -- # '[' -z 4064378 ']' 00:16:56.049 22:59:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.049 22:59:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.049 22:59:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.049 22:59:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.049 22:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:56.049 [2024-06-09 22:59:24.156411] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
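The dirty variant only changes the teardown: the lvstore is left dirty and the original nvmf_tgt (pid 4058401 above) is killed with SIGKILL, so nothing is flushed or unloaded cleanly. The restart that follows shows that simply re-creating the AIO bdev on the fresh target is enough to trigger blobstore recovery (the 'Performing recovery on blobstore' notices below), after which the cluster counts are still intact. A rough outline of that step, with $nvmfpid, $lvs_uuid and the file path standing in for the job's values:

  kill -9 "$nvmfpid"                                           # crash the target with the lvstore dirty
  build/bin/nvmf_tgt -m 0x1 &                                  # bring up a fresh target instance
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # blobstore recovery runs while the lvol layer examines the bdev
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters'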
00:16:56.049 [2024-06-09 22:59:24.156463] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.049 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.049 [2024-06-09 22:59:24.220312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.310 [2024-06-09 22:59:24.283742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:56.310 [2024-06-09 22:59:24.283861] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.310 [2024-06-09 22:59:24.283868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.310 [2024-06-09 22:59:24.283875] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:56.310 [2024-06-09 22:59:24.283894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.881 22:59:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.881 22:59:24 -- common/autotest_common.sh@852 -- # return 0 00:16:56.881 22:59:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:56.881 22:59:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:56.881 22:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:56.881 22:59:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:56.881 22:59:24 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:57.142 [2024-06-09 22:59:25.100739] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:57.142 [2024-06-09 22:59:25.100829] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:57.142 [2024-06-09 22:59:25.100858] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:57.142 22:59:25 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:57.142 22:59:25 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:57.142 22:59:25 -- common/autotest_common.sh@887 -- # local bdev_name=6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:57.142 22:59:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:57.142 22:59:25 -- common/autotest_common.sh@889 -- # local i 00:16:57.142 22:59:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:57.142 22:59:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:57.142 22:59:25 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:57.142 22:59:25 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6de6a2fb-06c0-4efb-9811-2a16bc6b615f -t 2000 00:16:57.404 [ 00:16:57.404 { 00:16:57.404 "name": "6de6a2fb-06c0-4efb-9811-2a16bc6b615f", 00:16:57.404 "aliases": [ 00:16:57.404 "lvs/lvol" 00:16:57.404 ], 00:16:57.404 "product_name": "Logical Volume", 00:16:57.404 "block_size": 4096, 00:16:57.404 "num_blocks": 38912, 00:16:57.404 "uuid": "6de6a2fb-06c0-4efb-9811-2a16bc6b615f", 00:16:57.404 "assigned_rate_limits": { 00:16:57.404 "rw_ios_per_sec": 0, 00:16:57.404 "rw_mbytes_per_sec": 0, 00:16:57.404 "r_mbytes_per_sec": 0, 00:16:57.404 
"w_mbytes_per_sec": 0 00:16:57.404 }, 00:16:57.404 "claimed": false, 00:16:57.404 "zoned": false, 00:16:57.404 "supported_io_types": { 00:16:57.404 "read": true, 00:16:57.404 "write": true, 00:16:57.404 "unmap": true, 00:16:57.404 "write_zeroes": true, 00:16:57.404 "flush": false, 00:16:57.404 "reset": true, 00:16:57.404 "compare": false, 00:16:57.404 "compare_and_write": false, 00:16:57.404 "abort": false, 00:16:57.404 "nvme_admin": false, 00:16:57.404 "nvme_io": false 00:16:57.404 }, 00:16:57.404 "driver_specific": { 00:16:57.404 "lvol": { 00:16:57.404 "lvol_store_uuid": "8fbb1cd1-633b-4509-b99a-5ecb38373eea", 00:16:57.404 "base_bdev": "aio_bdev", 00:16:57.404 "thin_provision": false, 00:16:57.404 "snapshot": false, 00:16:57.404 "clone": false, 00:16:57.404 "esnap_clone": false 00:16:57.404 } 00:16:57.404 } 00:16:57.404 } 00:16:57.404 ] 00:16:57.404 22:59:25 -- common/autotest_common.sh@895 -- # return 0 00:16:57.404 22:59:25 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:57.404 22:59:25 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:57.665 22:59:25 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:57.665 22:59:25 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:57.665 22:59:25 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:57.665 22:59:25 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:57.665 22:59:25 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:57.926 [2024-06-09 22:59:25.876695] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:57.926 22:59:25 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:57.926 22:59:25 -- common/autotest_common.sh@640 -- # local es=0 00:16:57.926 22:59:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:57.926 22:59:25 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.926 22:59:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:57.926 22:59:25 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.926 22:59:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:57.926 22:59:25 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.926 22:59:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:57.926 22:59:25 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.926 22:59:25 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:57.926 22:59:25 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:57.926 request: 00:16:57.926 { 00:16:57.926 
"uuid": "8fbb1cd1-633b-4509-b99a-5ecb38373eea", 00:16:57.926 "method": "bdev_lvol_get_lvstores", 00:16:57.926 "req_id": 1 00:16:57.926 } 00:16:57.926 Got JSON-RPC error response 00:16:57.926 response: 00:16:57.926 { 00:16:57.926 "code": -19, 00:16:57.926 "message": "No such device" 00:16:57.926 } 00:16:57.926 22:59:26 -- common/autotest_common.sh@643 -- # es=1 00:16:57.926 22:59:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:57.926 22:59:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:57.926 22:59:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:57.926 22:59:26 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:58.187 aio_bdev 00:16:58.187 22:59:26 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:58.187 22:59:26 -- common/autotest_common.sh@887 -- # local bdev_name=6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:58.187 22:59:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:58.187 22:59:26 -- common/autotest_common.sh@889 -- # local i 00:16:58.187 22:59:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:58.187 22:59:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:58.187 22:59:26 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:58.449 22:59:26 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6de6a2fb-06c0-4efb-9811-2a16bc6b615f -t 2000 00:16:58.449 [ 00:16:58.449 { 00:16:58.449 "name": "6de6a2fb-06c0-4efb-9811-2a16bc6b615f", 00:16:58.449 "aliases": [ 00:16:58.449 "lvs/lvol" 00:16:58.449 ], 00:16:58.449 "product_name": "Logical Volume", 00:16:58.449 "block_size": 4096, 00:16:58.449 "num_blocks": 38912, 00:16:58.449 "uuid": "6de6a2fb-06c0-4efb-9811-2a16bc6b615f", 00:16:58.449 "assigned_rate_limits": { 00:16:58.449 "rw_ios_per_sec": 0, 00:16:58.449 "rw_mbytes_per_sec": 0, 00:16:58.449 "r_mbytes_per_sec": 0, 00:16:58.449 "w_mbytes_per_sec": 0 00:16:58.449 }, 00:16:58.449 "claimed": false, 00:16:58.449 "zoned": false, 00:16:58.449 "supported_io_types": { 00:16:58.449 "read": true, 00:16:58.449 "write": true, 00:16:58.449 "unmap": true, 00:16:58.449 "write_zeroes": true, 00:16:58.449 "flush": false, 00:16:58.449 "reset": true, 00:16:58.449 "compare": false, 00:16:58.449 "compare_and_write": false, 00:16:58.449 "abort": false, 00:16:58.449 "nvme_admin": false, 00:16:58.449 "nvme_io": false 00:16:58.449 }, 00:16:58.449 "driver_specific": { 00:16:58.449 "lvol": { 00:16:58.449 "lvol_store_uuid": "8fbb1cd1-633b-4509-b99a-5ecb38373eea", 00:16:58.449 "base_bdev": "aio_bdev", 00:16:58.449 "thin_provision": false, 00:16:58.449 "snapshot": false, 00:16:58.449 "clone": false, 00:16:58.449 "esnap_clone": false 00:16:58.449 } 00:16:58.449 } 00:16:58.449 } 00:16:58.449 ] 00:16:58.449 22:59:26 -- common/autotest_common.sh@895 -- # return 0 00:16:58.449 22:59:26 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:58.449 22:59:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:58.710 22:59:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:58.710 22:59:26 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:58.710 22:59:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:58.710 22:59:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:58.710 22:59:26 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6de6a2fb-06c0-4efb-9811-2a16bc6b615f 00:16:58.971 22:59:26 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fbb1cd1-633b-4509-b99a-5ecb38373eea 00:16:58.971 22:59:27 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:59.233 22:59:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.233 00:16:59.233 real 0m16.641s 00:16:59.233 user 0m43.311s 00:16:59.233 sys 0m2.949s 00:16:59.233 22:59:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.233 22:59:27 -- common/autotest_common.sh@10 -- # set +x 00:16:59.233 ************************************ 00:16:59.233 END TEST lvs_grow_dirty 00:16:59.233 ************************************ 00:16:59.233 22:59:27 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:59.233 22:59:27 -- common/autotest_common.sh@796 -- # type=--id 00:16:59.233 22:59:27 -- common/autotest_common.sh@797 -- # id=0 00:16:59.233 22:59:27 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:16:59.233 22:59:27 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:59.233 22:59:27 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:16:59.233 22:59:27 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:16:59.233 22:59:27 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:16:59.233 22:59:27 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:59.233 nvmf_trace.0 00:16:59.233 22:59:27 -- common/autotest_common.sh@811 -- # return 0 00:16:59.233 22:59:27 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:59.233 22:59:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.233 22:59:27 -- nvmf/common.sh@116 -- # sync 00:16:59.494 22:59:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.494 22:59:27 -- nvmf/common.sh@119 -- # set +e 00:16:59.494 22:59:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.494 22:59:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.494 rmmod nvme_tcp 00:16:59.494 rmmod nvme_fabrics 00:16:59.494 rmmod nvme_keyring 00:16:59.494 22:59:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.494 22:59:27 -- nvmf/common.sh@123 -- # set -e 00:16:59.494 22:59:27 -- nvmf/common.sh@124 -- # return 0 00:16:59.494 22:59:27 -- nvmf/common.sh@477 -- # '[' -n 4064378 ']' 00:16:59.494 22:59:27 -- nvmf/common.sh@478 -- # killprocess 4064378 00:16:59.494 22:59:27 -- common/autotest_common.sh@926 -- # '[' -z 4064378 ']' 00:16:59.494 22:59:27 -- common/autotest_common.sh@930 -- # kill -0 4064378 00:16:59.494 22:59:27 -- common/autotest_common.sh@931 -- # uname 00:16:59.494 22:59:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.494 22:59:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4064378 00:16:59.494 22:59:27 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:16:59.494 22:59:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.494 22:59:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4064378' 00:16:59.494 killing process with pid 4064378 00:16:59.494 22:59:27 -- common/autotest_common.sh@945 -- # kill 4064378 00:16:59.494 22:59:27 -- common/autotest_common.sh@950 -- # wait 4064378 00:16:59.494 22:59:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.494 22:59:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.494 22:59:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.494 22:59:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.494 22:59:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.494 22:59:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.494 22:59:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.494 22:59:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.090 22:59:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:02.090 00:17:02.090 real 0m42.515s 00:17:02.090 user 1m3.811s 00:17:02.090 sys 0m10.043s 00:17:02.090 22:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.090 22:59:29 -- common/autotest_common.sh@10 -- # set +x 00:17:02.090 ************************************ 00:17:02.090 END TEST nvmf_lvs_grow 00:17:02.090 ************************************ 00:17:02.090 22:59:29 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:02.090 22:59:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:02.090 22:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:02.090 22:59:29 -- common/autotest_common.sh@10 -- # set +x 00:17:02.090 ************************************ 00:17:02.090 START TEST nvmf_bdev_io_wait 00:17:02.090 ************************************ 00:17:02.090 22:59:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:02.090 * Looking for test storage... 
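That closes nvmf_lvs_grow; its nvmftestfini teardown, traced above, is the shared pattern for these suites: kill the nvmf_tgt app by pid, unload the nvme-tcp, nvme-fabrics and nvme-keyring modules, remove the cvl_0_0_ns_spdk namespace, and flush the leftover address on cvl_0_1, so the bdev_io_wait suite starting here gets a clean host. Roughly, by hand (the namespace removal is an assumption about what _remove_spdk_ns amounts to):

  kill "$nvmfpid"                                      # killprocess: stop the target app
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring    # matches the rmmod output above
  ip netns delete cvl_0_0_ns_spdk                      # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                             # matches the flush traced above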
00:17:02.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.090 22:59:29 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.090 22:59:29 -- nvmf/common.sh@7 -- # uname -s 00:17:02.090 22:59:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.090 22:59:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.090 22:59:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.090 22:59:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.090 22:59:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.090 22:59:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.090 22:59:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.090 22:59:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.090 22:59:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.090 22:59:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.090 22:59:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.090 22:59:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:02.090 22:59:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.090 22:59:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.090 22:59:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.090 22:59:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.090 22:59:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:02.090 22:59:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.090 22:59:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.090 22:59:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.090 22:59:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.090 22:59:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.090 22:59:29 -- paths/export.sh@5 -- # export PATH 00:17:02.090 22:59:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.090 22:59:29 -- nvmf/common.sh@46 -- # : 0 00:17:02.090 22:59:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:02.090 22:59:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:02.090 22:59:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:02.090 22:59:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.090 22:59:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.090 22:59:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:02.090 22:59:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:02.090 22:59:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:02.090 22:59:29 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:02.090 22:59:29 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:02.090 22:59:29 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:02.090 22:59:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:02.090 22:59:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.090 22:59:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:02.090 22:59:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:02.090 22:59:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:02.090 22:59:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.090 22:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.090 22:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.090 22:59:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:02.090 22:59:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:02.090 22:59:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:02.090 22:59:29 -- common/autotest_common.sh@10 -- # set +x 00:17:08.678 22:59:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:08.678 22:59:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:08.678 22:59:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:08.678 22:59:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:08.678 22:59:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:08.678 22:59:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:08.678 22:59:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:08.678 22:59:36 -- nvmf/common.sh@294 -- # net_devs=() 00:17:08.678 22:59:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:08.678 22:59:36 -- 
nvmf/common.sh@295 -- # e810=() 00:17:08.678 22:59:36 -- nvmf/common.sh@295 -- # local -ga e810 00:17:08.678 22:59:36 -- nvmf/common.sh@296 -- # x722=() 00:17:08.678 22:59:36 -- nvmf/common.sh@296 -- # local -ga x722 00:17:08.678 22:59:36 -- nvmf/common.sh@297 -- # mlx=() 00:17:08.678 22:59:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:08.678 22:59:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.678 22:59:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:08.678 22:59:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:08.678 22:59:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.678 22:59:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:08.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:08.678 22:59:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:08.678 22:59:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:08.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:08.678 22:59:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.678 22:59:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.678 22:59:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.678 22:59:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:17:08.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:08.678 22:59:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.678 22:59:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:08.678 22:59:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.678 22:59:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.678 22:59:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:08.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:08.678 22:59:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.678 22:59:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:08.678 22:59:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:08.678 22:59:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:08.678 22:59:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.678 22:59:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.678 22:59:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.678 22:59:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:08.678 22:59:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.678 22:59:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.678 22:59:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:08.678 22:59:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.678 22:59:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.678 22:59:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:08.678 22:59:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:08.678 22:59:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.678 22:59:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.678 22:59:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.678 22:59:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.678 22:59:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:08.939 22:59:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.939 22:59:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.939 22:59:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.939 22:59:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:08.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:17:08.939 00:17:08.939 --- 10.0.0.2 ping statistics --- 00:17:08.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.939 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:17:08.939 22:59:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:17:08.939 00:17:08.939 --- 10.0.0.1 ping statistics --- 00:17:08.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.939 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:17:08.939 22:59:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.939 22:59:37 -- nvmf/common.sh@410 -- # return 0 00:17:08.939 22:59:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.939 22:59:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.939 22:59:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.939 22:59:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.939 22:59:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.939 22:59:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.939 22:59:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.939 22:59:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:08.939 22:59:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.939 22:59:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:08.939 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 22:59:37 -- nvmf/common.sh@469 -- # nvmfpid=4069339 00:17:08.939 22:59:37 -- nvmf/common.sh@470 -- # waitforlisten 4069339 00:17:08.939 22:59:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:08.939 22:59:37 -- common/autotest_common.sh@819 -- # '[' -z 4069339 ']' 00:17:08.939 22:59:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.939 22:59:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.939 22:59:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.939 22:59:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.939 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:08.939 [2024-06-09 22:59:37.098534] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:08.939 [2024-06-09 22:59:37.098594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.212 [2024-06-09 22:59:37.167917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.212 [2024-06-09 22:59:37.242315] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:09.212 [2024-06-09 22:59:37.242453] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.212 [2024-06-09 22:59:37.242465] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.212 [2024-06-09 22:59:37.242473] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
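The nvmf_tcp_init block above is what turns the two E810 ports into a self-contained TCP test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and the two pings verify reachability in both directions before nvmf_tgt is started inside the namespace in --wait-for-rpc mode. Condensed into plain commands (a sketch; interface names and the workspace path are specific to this host):

    # test network bring-up as traced above (root required)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns
    # target starts paused inside the namespace and waits for RPC configuration
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &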
00:17:09.212 [2024-06-09 22:59:37.242579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.212 [2024-06-09 22:59:37.242687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.212 [2024-06-09 22:59:37.242847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.212 [2024-06-09 22:59:37.242847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.783 22:59:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.783 22:59:37 -- common/autotest_common.sh@852 -- # return 0 00:17:09.783 22:59:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:09.783 22:59:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:09.783 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:09.783 22:59:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.783 22:59:37 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:09.783 22:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.783 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:09.783 22:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.783 22:59:37 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:09.784 22:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.784 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 22:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:37 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:10.044 22:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.044 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 [2024-06-09 22:59:37.978710] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.044 22:59:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:37 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:10.044 22:59:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.044 22:59:37 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 Malloc0 00:17:10.044 22:59:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:10.044 22:59:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.044 22:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 22:59:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:10.044 22:59:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.044 22:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 22:59:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.044 22:59:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.044 22:59:38 -- common/autotest_common.sh@10 -- # set +x 00:17:10.044 [2024-06-09 22:59:38.043730] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.044 22:59:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4069433 00:17:10.044 
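Those rpc_cmd calls are the entire target-side setup for the bdev_io_wait test; rpc_cmd here is the common.sh helper that forwards to scripts/rpc.py on the default /var/tmp/spdk.sock. Replayed by hand the same configuration would look roughly like this, with all values copied from the trace (the pool-size comment reflects the intent of this particular test rather than anything stated in the log):

    rpc=./scripts/rpc.py                     # under the spdk checkout; socket defaults to /var/tmp/spdk.sock
    $rpc bdev_set_options -p 5 -c 1          # deliberately tiny bdev_io pool/cache so I/O has to queue and wait
    $rpc framework_start_init                # leave the --wait-for-rpc pause
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420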
22:59:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=4069436 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:10.044 22:59:38 -- nvmf/common.sh@520 -- # config=() 00:17:10.044 22:59:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:10.044 22:59:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:10.044 22:59:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:10.044 { 00:17:10.044 "params": { 00:17:10.044 "name": "Nvme$subsystem", 00:17:10.044 "trtype": "$TEST_TRANSPORT", 00:17:10.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.044 "adrfam": "ipv4", 00:17:10.044 "trsvcid": "$NVMF_PORT", 00:17:10.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.044 "hdgst": ${hdgst:-false}, 00:17:10.044 "ddgst": ${ddgst:-false} 00:17:10.044 }, 00:17:10.044 "method": "bdev_nvme_attach_controller" 00:17:10.044 } 00:17:10.044 EOF 00:17:10.044 )") 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4069440 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:10.044 22:59:38 -- nvmf/common.sh@520 -- # config=() 00:17:10.044 22:59:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:10.044 22:59:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:10.044 22:59:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4069443 00:17:10.044 22:59:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:10.044 { 00:17:10.044 "params": { 00:17:10.044 "name": "Nvme$subsystem", 00:17:10.044 "trtype": "$TEST_TRANSPORT", 00:17:10.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "$NVMF_PORT", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.045 "hdgst": ${hdgst:-false}, 00:17:10.045 "ddgst": ${ddgst:-false} 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 } 00:17:10.045 EOF 00:17:10.045 )") 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@35 -- # sync 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # cat 00:17:10.045 22:59:38 -- nvmf/common.sh@520 -- # config=() 00:17:10.045 22:59:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:10.045 22:59:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:10.045 { 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme$subsystem", 00:17:10.045 "trtype": "$TEST_TRANSPORT", 00:17:10.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "$NVMF_PORT", 00:17:10.045 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.045 "hdgst": ${hdgst:-false}, 00:17:10.045 "ddgst": ${ddgst:-false} 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 } 00:17:10.045 EOF 00:17:10.045 )") 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:10.045 22:59:38 -- nvmf/common.sh@520 -- # config=() 00:17:10.045 22:59:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # cat 00:17:10.045 22:59:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:10.045 { 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme$subsystem", 00:17:10.045 "trtype": "$TEST_TRANSPORT", 00:17:10.045 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "$NVMF_PORT", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.045 "hdgst": ${hdgst:-false}, 00:17:10.045 "ddgst": ${ddgst:-false} 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 } 00:17:10.045 EOF 00:17:10.045 )") 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # cat 00:17:10.045 22:59:38 -- target/bdev_io_wait.sh@37 -- # wait 4069433 00:17:10.045 22:59:38 -- nvmf/common.sh@542 -- # cat 00:17:10.045 22:59:38 -- nvmf/common.sh@544 -- # jq . 00:17:10.045 22:59:38 -- nvmf/common.sh@544 -- # jq . 00:17:10.045 22:59:38 -- nvmf/common.sh@544 -- # jq . 00:17:10.045 22:59:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:10.045 22:59:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme1", 00:17:10.045 "trtype": "tcp", 00:17:10.045 "traddr": "10.0.0.2", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "4420", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.045 "hdgst": false, 00:17:10.045 "ddgst": false 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 }' 00:17:10.045 22:59:38 -- nvmf/common.sh@544 -- # jq . 
00:17:10.045 22:59:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:10.045 22:59:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme1", 00:17:10.045 "trtype": "tcp", 00:17:10.045 "traddr": "10.0.0.2", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "4420", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.045 "hdgst": false, 00:17:10.045 "ddgst": false 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 }' 00:17:10.045 22:59:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:10.045 22:59:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme1", 00:17:10.045 "trtype": "tcp", 00:17:10.045 "traddr": "10.0.0.2", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "4420", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.045 "hdgst": false, 00:17:10.045 "ddgst": false 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 }' 00:17:10.045 22:59:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:10.045 22:59:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:10.045 "params": { 00:17:10.045 "name": "Nvme1", 00:17:10.045 "trtype": "tcp", 00:17:10.045 "traddr": "10.0.0.2", 00:17:10.045 "adrfam": "ipv4", 00:17:10.045 "trsvcid": "4420", 00:17:10.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.045 "hdgst": false, 00:17:10.045 "ddgst": false 00:17:10.045 }, 00:17:10.045 "method": "bdev_nvme_attach_controller" 00:17:10.045 }' 00:17:10.045 [2024-06-09 22:59:38.091872] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:10.045 [2024-06-09 22:59:38.091927] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:10.045 [2024-06-09 22:59:38.094733] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:10.045 [2024-06-09 22:59:38.094781] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:10.045 [2024-06-09 22:59:38.097203] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:10.045 [2024-06-09 22:59:38.097249] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:10.045 [2024-06-09 22:59:38.098078] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:10.045 [2024-06-09 22:59:38.098162] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:10.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.045 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.307 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.307 [2024-06-09 22:59:38.235922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.307 [2024-06-09 22:59:38.277214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.307 [2024-06-09 22:59:38.286067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:10.307 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.307 [2024-06-09 22:59:38.325352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:10.307 [2024-06-09 22:59:38.339636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.307 [2024-06-09 22:59:38.384973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.307 [2024-06-09 22:59:38.389916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:10.307 [2024-06-09 22:59:38.432547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:10.307 Running I/O for 1 seconds... 00:17:10.568 Running I/O for 1 seconds... 00:17:10.568 Running I/O for 1 seconds... 00:17:10.568 Running I/O for 1 seconds... 00:17:11.511 00:17:11.511 Latency(us) 00:17:11.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.511 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:11.511 Nvme1n1 : 1.01 9723.46 37.98 0.00 0.00 13075.75 4014.08 18350.08 00:17:11.511 =================================================================================================================== 00:17:11.511 Total : 9723.46 37.98 0.00 0.00 13075.75 4014.08 18350.08 00:17:11.511 00:17:11.511 Latency(us) 00:17:11.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.511 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:11.511 Nvme1n1 : 1.01 9160.88 35.78 0.00 0.00 13929.21 5625.17 29709.65 00:17:11.511 =================================================================================================================== 00:17:11.511 Total : 9160.88 35.78 0.00 0.00 13929.21 5625.17 29709.65 00:17:11.511 00:17:11.511 Latency(us) 00:17:11.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.511 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:11.511 Nvme1n1 : 1.00 11476.57 44.83 0.00 0.00 11129.74 3222.19 49152.00 00:17:11.511 =================================================================================================================== 00:17:11.511 Total : 11476.57 44.83 0.00 0.00 11129.74 3222.19 49152.00 00:17:11.511 00:17:11.511 Latency(us) 00:17:11.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.511 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:11.511 Nvme1n1 : 1.00 192841.52 753.29 0.00 0.00 661.23 262.83 737.28 00:17:11.511 =================================================================================================================== 00:17:11.511 Total : 192841.52 753.29 0.00 0.00 661.23 262.83 737.28 00:17:11.771 22:59:39 -- target/bdev_io_wait.sh@38 -- # wait 4069436 00:17:11.771 
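Each of the four one-second result tables above comes from its own bdevperf process: the write, read, flush and unmap jobs are pinned to separate cores (-m 0x10/0x20/0x40/0x80), given distinct EAL instance ids and file prefixes (-i 1..4, spdk1..spdk4 in the EAL parameters), and all attach to the same Nvme1 controller through the JSON fragment printed above. A condensed reconstruction of one invocation, assuming the harness feeds gen_nvmf_target_json through bash process substitution (which is what the /dev/fd/63 in the trace suggests):

    # the write job; the read/flush/unmap jobs differ only in -m, -i and -w
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)   # config: bdev_nvme_attach_controller -> 10.0.0.2:4420, as printed above
    # -q 128: queue depth, -o 4096: 4 KiB I/O, -t 1: one second, -s 256: 256 MB hugepage memory per instance

The flush job's far higher IOPS likely just reflects that flushing a RAM-backed malloc bdev on the target completes without moving any data.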
22:59:39 -- target/bdev_io_wait.sh@39 -- # wait 4069440 00:17:11.771 22:59:39 -- target/bdev_io_wait.sh@40 -- # wait 4069443 00:17:11.771 22:59:39 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.771 22:59:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:11.771 22:59:39 -- common/autotest_common.sh@10 -- # set +x 00:17:11.771 22:59:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:11.771 22:59:39 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:11.771 22:59:39 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:11.771 22:59:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:11.771 22:59:39 -- nvmf/common.sh@116 -- # sync 00:17:11.771 22:59:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:11.771 22:59:39 -- nvmf/common.sh@119 -- # set +e 00:17:11.771 22:59:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:11.771 22:59:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:11.771 rmmod nvme_tcp 00:17:11.771 rmmod nvme_fabrics 00:17:11.771 rmmod nvme_keyring 00:17:11.771 22:59:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:11.771 22:59:39 -- nvmf/common.sh@123 -- # set -e 00:17:11.771 22:59:39 -- nvmf/common.sh@124 -- # return 0 00:17:11.771 22:59:39 -- nvmf/common.sh@477 -- # '[' -n 4069339 ']' 00:17:11.771 22:59:39 -- nvmf/common.sh@478 -- # killprocess 4069339 00:17:11.771 22:59:39 -- common/autotest_common.sh@926 -- # '[' -z 4069339 ']' 00:17:11.771 22:59:39 -- common/autotest_common.sh@930 -- # kill -0 4069339 00:17:11.771 22:59:39 -- common/autotest_common.sh@931 -- # uname 00:17:11.771 22:59:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.771 22:59:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4069339 00:17:12.032 22:59:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:12.032 22:59:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:12.032 22:59:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4069339' 00:17:12.032 killing process with pid 4069339 00:17:12.032 22:59:39 -- common/autotest_common.sh@945 -- # kill 4069339 00:17:12.032 22:59:39 -- common/autotest_common.sh@950 -- # wait 4069339 00:17:12.032 22:59:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:12.032 22:59:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:12.032 22:59:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:12.032 22:59:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.032 22:59:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:12.032 22:59:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.032 22:59:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.032 22:59:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.580 22:59:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:14.580 00:17:14.580 real 0m12.397s 00:17:14.580 user 0m19.012s 00:17:14.580 sys 0m6.602s 00:17:14.580 22:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.580 22:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:14.580 ************************************ 00:17:14.580 END TEST nvmf_bdev_io_wait 00:17:14.580 ************************************ 00:17:14.580 22:59:42 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:14.580 22:59:42 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:14.580 22:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:14.580 22:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:14.580 ************************************ 00:17:14.580 START TEST nvmf_queue_depth 00:17:14.580 ************************************ 00:17:14.580 22:59:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:14.580 * Looking for test storage... 00:17:14.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.580 22:59:42 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.580 22:59:42 -- nvmf/common.sh@7 -- # uname -s 00:17:14.580 22:59:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.580 22:59:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.580 22:59:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.580 22:59:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.580 22:59:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.580 22:59:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.580 22:59:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.580 22:59:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.580 22:59:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.580 22:59:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.580 22:59:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.580 22:59:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.580 22:59:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.580 22:59:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.580 22:59:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.580 22:59:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.580 22:59:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.580 22:59:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.580 22:59:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.580 22:59:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.580 22:59:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.580 22:59:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.580 22:59:42 -- paths/export.sh@5 -- # export PATH 00:17:14.580 22:59:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.580 22:59:42 -- nvmf/common.sh@46 -- # : 0 00:17:14.580 22:59:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:14.580 22:59:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:14.580 22:59:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:14.580 22:59:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.580 22:59:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.580 22:59:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:14.580 22:59:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:14.580 22:59:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:14.580 22:59:42 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:14.580 22:59:42 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:14.580 22:59:42 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:14.580 22:59:42 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:14.580 22:59:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:14.580 22:59:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.580 22:59:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:14.580 22:59:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:14.580 22:59:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:14.580 22:59:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.580 22:59:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.580 22:59:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.580 22:59:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:14.580 22:59:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:14.580 22:59:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:14.580 22:59:42 -- common/autotest_common.sh@10 -- # set +x 00:17:21.173 22:59:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:21.173 22:59:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:21.173 22:59:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:21.173 22:59:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:21.173 22:59:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:21.173 22:59:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:21.173 22:59:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:21.173 22:59:48 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:21.173 22:59:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:21.173 22:59:48 -- nvmf/common.sh@295 -- # e810=() 00:17:21.173 22:59:48 -- nvmf/common.sh@295 -- # local -ga e810 00:17:21.173 22:59:48 -- nvmf/common.sh@296 -- # x722=() 00:17:21.173 22:59:48 -- nvmf/common.sh@296 -- # local -ga x722 00:17:21.173 22:59:48 -- nvmf/common.sh@297 -- # mlx=() 00:17:21.173 22:59:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:21.173 22:59:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.173 22:59:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.173 22:59:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.173 22:59:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.173 22:59:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.174 22:59:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:21.174 22:59:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:21.174 22:59:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.174 22:59:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:21.174 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:21.174 22:59:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:21.174 22:59:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:21.174 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:21.174 22:59:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.174 22:59:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.174 22:59:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:21.174 22:59:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:21.174 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:21.174 22:59:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.174 22:59:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.174 22:59:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.174 22:59:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.174 22:59:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:21.174 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:21.174 22:59:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.174 22:59:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:21.174 22:59:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:21.174 22:59:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:21.174 22:59:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.174 22:59:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.174 22:59:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.174 22:59:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:21.174 22:59:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.174 22:59:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.174 22:59:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:21.174 22:59:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.174 22:59:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.174 22:59:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:21.174 22:59:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:21.174 22:59:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.174 22:59:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.174 22:59:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.174 22:59:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.174 22:59:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:21.174 22:59:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.174 22:59:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.174 22:59:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.174 22:59:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:21.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:17:21.174 00:17:21.174 --- 10.0.0.2 ping statistics --- 00:17:21.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.174 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:17:21.174 22:59:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:17:21.174 00:17:21.174 --- 10.0.0.1 ping statistics --- 00:17:21.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.174 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:17:21.174 22:59:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.174 22:59:49 -- nvmf/common.sh@410 -- # return 0 00:17:21.174 22:59:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:21.174 22:59:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.174 22:59:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:21.174 22:59:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:21.174 22:59:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.174 22:59:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:21.174 22:59:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:21.174 22:59:49 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:21.174 22:59:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.174 22:59:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.174 22:59:49 -- common/autotest_common.sh@10 -- # set +x 00:17:21.174 22:59:49 -- nvmf/common.sh@469 -- # nvmfpid=4074068 00:17:21.174 22:59:49 -- nvmf/common.sh@470 -- # waitforlisten 4074068 00:17:21.174 22:59:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.174 22:59:49 -- common/autotest_common.sh@819 -- # '[' -z 4074068 ']' 00:17:21.174 22:59:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.174 22:59:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.174 22:59:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.174 22:59:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.174 22:59:49 -- common/autotest_common.sh@10 -- # set +x 00:17:21.174 [2024-06-09 22:59:49.337088] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:21.174 [2024-06-09 22:59:49.337150] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.436 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.436 [2024-06-09 22:59:49.405544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.436 [2024-06-09 22:59:49.476943] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.436 [2024-06-09 22:59:49.477061] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.436 [2024-06-09 22:59:49.477069] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.436 [2024-06-09 22:59:49.477076] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.436 [2024-06-09 22:59:49.477096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.007 22:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.007 22:59:50 -- common/autotest_common.sh@852 -- # return 0 00:17:22.007 22:59:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.007 22:59:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.007 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 22:59:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.007 22:59:50 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:22.007 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.007 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 [2024-06-09 22:59:50.143476] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.007 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.007 22:59:50 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:22.007 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.007 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.007 Malloc0 00:17:22.007 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.007 22:59:50 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:22.007 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.007 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.269 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.269 22:59:50 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:22.269 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.269 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.269 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.269 22:59:50 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:22.269 22:59:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.269 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.269 [2024-06-09 22:59:50.210125] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.269 22:59:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:22.269 22:59:50 -- target/queue_depth.sh@30 -- # bdevperf_pid=4074113 00:17:22.269 22:59:50 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.269 22:59:50 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:22.269 22:59:50 -- target/queue_depth.sh@33 -- # waitforlisten 4074113 /var/tmp/bdevperf.sock 00:17:22.269 22:59:50 -- common/autotest_common.sh@819 -- # '[' -z 4074113 ']' 00:17:22.269 22:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:22.269 22:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.269 22:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
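From here the queue-depth measurement is driven from the initiator side: bdevperf was started paused (-z) with a private RPC socket, the waitforlisten above blocks until that socket is up, and the steps that follow in the log attach the exported subsystem as a local NVMe bdev and then fire the 10-second verify workload at queue depth 1024. Condensed (a sketch; paths as they appear in the trace, relative to the spdk checkout):

    # bdevperf side of queue_depth.sh
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!                                    # 4074113 in this run
    # once the socket is listening, attach the target; this creates bdev NVMe0n1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # run the configured workload and print the per-device results
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests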
00:17:22.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:22.269 22:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.269 22:59:50 -- common/autotest_common.sh@10 -- # set +x 00:17:22.269 [2024-06-09 22:59:50.258905] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:22.269 [2024-06-09 22:59:50.258956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4074113 ] 00:17:22.269 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.269 [2024-06-09 22:59:50.316663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.269 [2024-06-09 22:59:50.379342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.841 22:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.841 22:59:51 -- common/autotest_common.sh@852 -- # return 0 00:17:22.841 22:59:51 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:22.841 22:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:22.841 22:59:51 -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 NVMe0n1 00:17:23.102 22:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:23.102 22:59:51 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:23.102 Running I/O for 10 seconds... 00:17:33.176 00:17:33.176 Latency(us) 00:17:33.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.176 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:33.176 Verification LBA range: start 0x0 length 0x4000 00:17:33.176 NVMe0n1 : 10.06 14313.66 55.91 0.00 0.00 71275.99 15291.73 52647.25 00:17:33.176 =================================================================================================================== 00:17:33.176 Total : 14313.66 55.91 0.00 0.00 71275.99 15291.73 52647.25 00:17:33.176 0 00:17:33.176 23:00:01 -- target/queue_depth.sh@39 -- # killprocess 4074113 00:17:33.176 23:00:01 -- common/autotest_common.sh@926 -- # '[' -z 4074113 ']' 00:17:33.176 23:00:01 -- common/autotest_common.sh@930 -- # kill -0 4074113 00:17:33.176 23:00:01 -- common/autotest_common.sh@931 -- # uname 00:17:33.176 23:00:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.176 23:00:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4074113 00:17:33.441 23:00:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:33.441 23:00:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:33.441 23:00:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4074113' 00:17:33.441 killing process with pid 4074113 00:17:33.441 23:00:01 -- common/autotest_common.sh@945 -- # kill 4074113 00:17:33.441 Received shutdown signal, test time was about 10.000000 seconds 00:17:33.441 00:17:33.441 Latency(us) 00:17:33.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.441 =================================================================================================================== 00:17:33.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:33.441 23:00:01 -- 
common/autotest_common.sh@950 -- # wait 4074113 00:17:33.441 23:00:01 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:33.441 23:00:01 -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:33.441 23:00:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.441 23:00:01 -- nvmf/common.sh@116 -- # sync 00:17:33.441 23:00:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.441 23:00:01 -- nvmf/common.sh@119 -- # set +e 00:17:33.441 23:00:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.441 23:00:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.441 rmmod nvme_tcp 00:17:33.441 rmmod nvme_fabrics 00:17:33.441 rmmod nvme_keyring 00:17:33.441 23:00:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.441 23:00:01 -- nvmf/common.sh@123 -- # set -e 00:17:33.441 23:00:01 -- nvmf/common.sh@124 -- # return 0 00:17:33.441 23:00:01 -- nvmf/common.sh@477 -- # '[' -n 4074068 ']' 00:17:33.442 23:00:01 -- nvmf/common.sh@478 -- # killprocess 4074068 00:17:33.442 23:00:01 -- common/autotest_common.sh@926 -- # '[' -z 4074068 ']' 00:17:33.442 23:00:01 -- common/autotest_common.sh@930 -- # kill -0 4074068 00:17:33.442 23:00:01 -- common/autotest_common.sh@931 -- # uname 00:17:33.442 23:00:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:33.442 23:00:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4074068 00:17:33.706 23:00:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:33.706 23:00:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:33.706 23:00:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4074068' 00:17:33.706 killing process with pid 4074068 00:17:33.706 23:00:01 -- common/autotest_common.sh@945 -- # kill 4074068 00:17:33.706 23:00:01 -- common/autotest_common.sh@950 -- # wait 4074068 00:17:33.706 23:00:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.706 23:00:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.706 23:00:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.706 23:00:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.706 23:00:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.706 23:00:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.706 23:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.706 23:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.247 23:00:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:36.247 00:17:36.247 real 0m21.601s 00:17:36.247 user 0m25.308s 00:17:36.247 sys 0m6.274s 00:17:36.247 23:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.247 23:00:03 -- common/autotest_common.sh@10 -- # set +x 00:17:36.247 ************************************ 00:17:36.247 END TEST nvmf_queue_depth 00:17:36.247 ************************************ 00:17:36.247 23:00:03 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.247 23:00:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:36.247 23:00:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:36.247 23:00:03 -- common/autotest_common.sh@10 -- # set +x 00:17:36.247 ************************************ 00:17:36.247 START TEST nvmf_multipath 00:17:36.247 ************************************ 00:17:36.247 23:00:03 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:36.247 * Looking for test storage... 00:17:36.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.247 23:00:03 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.247 23:00:03 -- nvmf/common.sh@7 -- # uname -s 00:17:36.247 23:00:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.247 23:00:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.247 23:00:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.247 23:00:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.247 23:00:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.247 23:00:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.247 23:00:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.247 23:00:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.247 23:00:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.247 23:00:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.247 23:00:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.247 23:00:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.247 23:00:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.247 23:00:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.247 23:00:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.247 23:00:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.247 23:00:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.247 23:00:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.247 23:00:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.247 23:00:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.247 23:00:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.247 23:00:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.247 23:00:04 -- paths/export.sh@5 -- # export PATH 00:17:36.247 23:00:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.247 23:00:04 -- nvmf/common.sh@46 -- # : 0 00:17:36.247 23:00:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:36.248 23:00:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:36.248 23:00:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:36.248 23:00:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.248 23:00:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.248 23:00:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:36.248 23:00:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:36.248 23:00:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:36.248 23:00:04 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:36.248 23:00:04 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:36.248 23:00:04 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:36.248 23:00:04 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.248 23:00:04 -- target/multipath.sh@43 -- # nvmftestinit 00:17:36.248 23:00:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:36.248 23:00:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.248 23:00:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:36.248 23:00:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:36.248 23:00:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:36.248 23:00:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.248 23:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.248 23:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.248 23:00:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:36.248 23:00:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:36.248 23:00:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:36.248 23:00:04 -- common/autotest_common.sh@10 -- # set +x 00:17:42.839 23:00:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.839 23:00:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:42.839 23:00:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:42.839 23:00:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:42.839 23:00:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:42.839 23:00:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:42.839 23:00:10 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:17:42.839 23:00:10 -- nvmf/common.sh@294 -- # net_devs=() 00:17:42.839 23:00:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:42.839 23:00:10 -- nvmf/common.sh@295 -- # e810=() 00:17:42.839 23:00:10 -- nvmf/common.sh@295 -- # local -ga e810 00:17:42.839 23:00:10 -- nvmf/common.sh@296 -- # x722=() 00:17:42.839 23:00:10 -- nvmf/common.sh@296 -- # local -ga x722 00:17:42.839 23:00:10 -- nvmf/common.sh@297 -- # mlx=() 00:17:42.839 23:00:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:42.839 23:00:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.839 23:00:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:42.839 23:00:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:42.839 23:00:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:42.839 23:00:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.839 23:00:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:42.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:42.839 23:00:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.839 23:00:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:42.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:42.839 23:00:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:42.839 23:00:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.839 23:00:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.839 23:00:10 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:17:42.839 23:00:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.839 23:00:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:42.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:42.839 23:00:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.839 23:00:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.839 23:00:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.839 23:00:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.839 23:00:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.839 23:00:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:42.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:42.839 23:00:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.839 23:00:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:42.839 23:00:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:42.839 23:00:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:42.839 23:00:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:42.839 23:00:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.839 23:00:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.839 23:00:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.839 23:00:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:42.840 23:00:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.840 23:00:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.840 23:00:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:42.840 23:00:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.840 23:00:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.840 23:00:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:42.840 23:00:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:42.840 23:00:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.840 23:00:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.840 23:00:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.840 23:00:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.840 23:00:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:42.840 23:00:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.840 23:00:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.840 23:00:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.101 23:00:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:43.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:17:43.101 00:17:43.101 --- 10.0.0.2 ping statistics --- 00:17:43.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.101 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:17:43.101 23:00:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:17:43.101 00:17:43.101 --- 10.0.0.1 ping statistics --- 00:17:43.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.101 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:17:43.101 23:00:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.101 23:00:11 -- nvmf/common.sh@410 -- # return 0 00:17:43.101 23:00:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:43.101 23:00:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.101 23:00:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:43.101 23:00:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:43.101 23:00:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.101 23:00:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:43.101 23:00:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:43.101 23:00:11 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:43.101 23:00:11 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:43.101 only one NIC for nvmf test 00:17:43.101 23:00:11 -- target/multipath.sh@47 -- # nvmftestfini 00:17:43.101 23:00:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.101 23:00:11 -- nvmf/common.sh@116 -- # sync 00:17:43.101 23:00:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.101 23:00:11 -- nvmf/common.sh@119 -- # set +e 00:17:43.101 23:00:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.101 23:00:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.101 rmmod nvme_tcp 00:17:43.101 rmmod nvme_fabrics 00:17:43.101 rmmod nvme_keyring 00:17:43.101 23:00:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.101 23:00:11 -- nvmf/common.sh@123 -- # set -e 00:17:43.101 23:00:11 -- nvmf/common.sh@124 -- # return 0 00:17:43.101 23:00:11 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:43.101 23:00:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:43.101 23:00:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:43.101 23:00:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:43.101 23:00:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.101 23:00:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:43.101 23:00:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.101 23:00:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.101 23:00:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.648 23:00:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.648 23:00:13 -- target/multipath.sh@48 -- # exit 0 00:17:45.648 23:00:13 -- target/multipath.sh@1 -- # nvmftestfini 00:17:45.648 23:00:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:45.648 23:00:13 -- nvmf/common.sh@116 -- # sync 00:17:45.648 23:00:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@119 -- # set +e 00:17:45.648 23:00:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:45.648 23:00:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:45.648 23:00:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:45.648 23:00:13 -- nvmf/common.sh@123 -- # set -e 00:17:45.648 23:00:13 -- nvmf/common.sh@124 -- # return 0 00:17:45.648 23:00:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:45.648 23:00:13 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:17:45.648 23:00:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.648 23:00:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:45.648 23:00:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.648 23:00:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.648 23:00:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.648 23:00:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.648 00:17:45.648 real 0m9.392s 00:17:45.648 user 0m2.040s 00:17:45.648 sys 0m5.245s 00:17:45.648 23:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.648 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:45.648 ************************************ 00:17:45.648 END TEST nvmf_multipath 00:17:45.648 ************************************ 00:17:45.648 23:00:13 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:45.648 23:00:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:45.648 23:00:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.648 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:45.648 ************************************ 00:17:45.648 START TEST nvmf_zcopy 00:17:45.648 ************************************ 00:17:45.648 23:00:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:45.648 * Looking for test storage... 00:17:45.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.648 23:00:13 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.648 23:00:13 -- nvmf/common.sh@7 -- # uname -s 00:17:45.648 23:00:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.648 23:00:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.648 23:00:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.648 23:00:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.648 23:00:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.648 23:00:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.648 23:00:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.648 23:00:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.648 23:00:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.648 23:00:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.648 23:00:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.648 23:00:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.648 23:00:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.648 23:00:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.648 23:00:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.648 23:00:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.648 23:00:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.648 23:00:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.648 23:00:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.648 23:00:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.648 23:00:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.648 23:00:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.648 23:00:13 -- paths/export.sh@5 -- # export PATH 00:17:45.648 23:00:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.648 23:00:13 -- nvmf/common.sh@46 -- # : 0 00:17:45.648 23:00:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.648 23:00:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.648 23:00:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.648 23:00:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.648 23:00:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.648 23:00:13 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:45.648 23:00:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.648 23:00:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.648 23:00:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.648 23:00:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.648 23:00:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.648 23:00:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.648 23:00:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.648 23:00:13 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.649 23:00:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:45.649 23:00:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:45.649 23:00:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:45.649 23:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:52.238 23:00:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:52.238 23:00:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:52.238 23:00:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:52.238 23:00:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:52.238 23:00:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:52.238 23:00:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:52.238 23:00:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:52.238 23:00:19 -- nvmf/common.sh@294 -- # net_devs=() 00:17:52.238 23:00:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:52.238 23:00:19 -- nvmf/common.sh@295 -- # e810=() 00:17:52.238 23:00:19 -- nvmf/common.sh@295 -- # local -ga e810 00:17:52.238 23:00:19 -- nvmf/common.sh@296 -- # x722=() 00:17:52.238 23:00:19 -- nvmf/common.sh@296 -- # local -ga x722 00:17:52.238 23:00:19 -- nvmf/common.sh@297 -- # mlx=() 00:17:52.238 23:00:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:52.238 23:00:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:52.238 23:00:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:52.238 23:00:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:52.238 23:00:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:52.238 23:00:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.238 23:00:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:52.238 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:52.238 23:00:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:52.238 23:00:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:52.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:52.238 
23:00:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:52.238 23:00:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:52.238 23:00:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.238 23:00:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.238 23:00:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.238 23:00:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.238 23:00:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:52.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:52.238 23:00:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.238 23:00:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:52.238 23:00:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:52.238 23:00:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:52.238 23:00:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:52.238 23:00:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:52.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:52.238 23:00:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:52.238 23:00:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:52.239 23:00:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:52.239 23:00:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:52.239 23:00:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:52.239 23:00:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:52.239 23:00:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.239 23:00:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.239 23:00:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:52.239 23:00:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:52.239 23:00:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:52.239 23:00:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:52.239 23:00:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:52.239 23:00:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:52.239 23:00:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.239 23:00:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:52.239 23:00:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:52.239 23:00:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:52.239 23:00:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:52.239 23:00:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:52.239 23:00:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:52.239 23:00:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:52.239 23:00:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:52.239 23:00:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:52.239 23:00:20 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:52.239 23:00:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:52.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:17:52.239 00:17:52.239 --- 10.0.0.2 ping statistics --- 00:17:52.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.239 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:17:52.239 23:00:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:52.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:17:52.239 00:17:52.239 --- 10.0.0.1 ping statistics --- 00:17:52.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.239 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:17:52.239 23:00:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.239 23:00:20 -- nvmf/common.sh@410 -- # return 0 00:17:52.239 23:00:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:52.239 23:00:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.239 23:00:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:52.239 23:00:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:52.239 23:00:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.239 23:00:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:52.239 23:00:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:52.239 23:00:20 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:52.239 23:00:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:52.239 23:00:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:52.239 23:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 23:00:20 -- nvmf/common.sh@469 -- # nvmfpid=4085223 00:17:52.239 23:00:20 -- nvmf/common.sh@470 -- # waitforlisten 4085223 00:17:52.239 23:00:20 -- common/autotest_common.sh@819 -- # '[' -z 4085223 ']' 00:17:52.239 23:00:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.239 23:00:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.239 23:00:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.239 23:00:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.239 23:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:52.239 23:00:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.239 [2024-06-09 23:00:20.287423] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
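The nvmf_tcp_init sequence traced above amounts to a small amount of plain iproute2/iptables work: one port of the e810 pair (cvl_0_0) is moved into a private network namespace and becomes the target side, while the other port (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of that setup, using the same interface names, addresses and namespace that appear in the trace (this is a summary of the traced commands, not the test script itself):

# Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP traffic to the default port 4420 through on the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before starting the target, as the trace does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1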
00:17:52.239 [2024-06-09 23:00:20.287512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.239 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.239 [2024-06-09 23:00:20.357096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.500 [2024-06-09 23:00:20.428664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:52.500 [2024-06-09 23:00:20.428783] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.500 [2024-06-09 23:00:20.428791] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.500 [2024-06-09 23:00:20.428798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.500 [2024-06-09 23:00:20.428817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.072 23:00:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.072 23:00:21 -- common/autotest_common.sh@852 -- # return 0 00:17:53.072 23:00:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:53.072 23:00:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 23:00:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.072 23:00:21 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:53.072 23:00:21 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 [2024-06-09 23:00:21.083689] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 [2024-06-09 23:00:21.099826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 malloc0 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.072 23:00:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:53.072 23:00:21 -- common/autotest_common.sh@10 -- # set +x 00:17:53.072 23:00:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:53.072 23:00:21 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:53.072 23:00:21 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:53.072 23:00:21 -- nvmf/common.sh@520 -- # config=() 00:17:53.072 23:00:21 -- nvmf/common.sh@520 -- # local subsystem config 00:17:53.072 23:00:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:53.072 23:00:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:53.072 { 00:17:53.072 "params": { 00:17:53.072 "name": "Nvme$subsystem", 00:17:53.072 "trtype": "$TEST_TRANSPORT", 00:17:53.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.072 "adrfam": "ipv4", 00:17:53.072 "trsvcid": "$NVMF_PORT", 00:17:53.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.072 "hdgst": ${hdgst:-false}, 00:17:53.072 "ddgst": ${ddgst:-false} 00:17:53.072 }, 00:17:53.072 "method": "bdev_nvme_attach_controller" 00:17:53.072 } 00:17:53.072 EOF 00:17:53.072 )") 00:17:53.072 23:00:21 -- nvmf/common.sh@542 -- # cat 00:17:53.072 23:00:21 -- nvmf/common.sh@544 -- # jq . 00:17:53.072 23:00:21 -- nvmf/common.sh@545 -- # IFS=, 00:17:53.072 23:00:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:53.072 "params": { 00:17:53.072 "name": "Nvme1", 00:17:53.072 "trtype": "tcp", 00:17:53.072 "traddr": "10.0.0.2", 00:17:53.072 "adrfam": "ipv4", 00:17:53.072 "trsvcid": "4420", 00:17:53.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.072 "hdgst": false, 00:17:53.072 "ddgst": false 00:17:53.072 }, 00:17:53.072 "method": "bdev_nvme_attach_controller" 00:17:53.072 }' 00:17:53.072 [2024-06-09 23:00:21.183284] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:53.072 [2024-06-09 23:00:21.183338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4085391 ] 00:17:53.072 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.072 [2024-06-09 23:00:21.242052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.332 [2024-06-09 23:00:21.304321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.332 Running I/O for 10 seconds... 
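The JSON fragment printed by gen_nvmf_target_json above is what bdevperf reads from /dev/fd/62: a single bdev_nvme_attach_controller call aimed at the 10.0.0.2:4420 listener created a few lines earlier. A standalone sketch of an equivalent run, assuming the usual SPDK application JSON layout for the outer wrapper (only the inner fragment is visible in the trace):

# Hypothetical re-run of the 10 s verify workload against the same target.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced invocation: 10 s runtime, queue depth 128, verify workload, 8 KiB I/O.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192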
00:18:03.367 00:18:03.368 Latency(us) 00:18:03.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.368 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:03.368 Verification LBA range: start 0x0 length 0x1000 00:18:03.368 Nvme1n1 : 10.01 10232.21 79.94 0.00 0.00 12478.16 1952.43 29709.65 00:18:03.368 =================================================================================================================== 00:18:03.368 Total : 10232.21 79.94 0.00 0.00 12478.16 1952.43 29709.65 00:18:03.629 23:00:31 -- target/zcopy.sh@39 -- # perfpid=4087425 00:18:03.629 23:00:31 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:03.629 23:00:31 -- common/autotest_common.sh@10 -- # set +x 00:18:03.629 23:00:31 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:03.629 23:00:31 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:03.629 23:00:31 -- nvmf/common.sh@520 -- # config=() 00:18:03.629 23:00:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.629 23:00:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.629 23:00:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.629 { 00:18:03.629 "params": { 00:18:03.629 "name": "Nvme$subsystem", 00:18:03.629 "trtype": "$TEST_TRANSPORT", 00:18:03.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.629 "adrfam": "ipv4", 00:18:03.629 "trsvcid": "$NVMF_PORT", 00:18:03.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.629 "hdgst": ${hdgst:-false}, 00:18:03.629 "ddgst": ${ddgst:-false} 00:18:03.629 }, 00:18:03.629 "method": "bdev_nvme_attach_controller" 00:18:03.629 } 00:18:03.629 EOF 00:18:03.629 )") 00:18:03.629 [2024-06-09 23:00:31.621924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.621957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 23:00:31 -- nvmf/common.sh@542 -- # cat 00:18:03.629 23:00:31 -- nvmf/common.sh@544 -- # jq . 
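From this point on the log is dominated by repeating pairs of 'Requested NSID 1 already in use' and 'Unable to add namespace' messages. The pair itself just means that an nvmf_subsystem_add_ns RPC asked for NSID 1 on cnode1 while that NSID was still occupied (malloc0 was attached as NSID 1 earlier in the trace). A hypothetical rpc.py sequence, not what zcopy.sh actually loops over, that reproduces the same two messages:

# malloc0 already occupies NSID 1 on cnode1, so a second add with -n 1 is rejected.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 32 4096 -b malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # fails: Requested NSID 1 already in use / Unable to add namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 2   # a free NSID (the subsystem allows up to 10) is accepted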
00:18:03.629 [2024-06-09 23:00:31.629917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.629929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 23:00:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.629 23:00:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.629 "params": { 00:18:03.629 "name": "Nvme1", 00:18:03.629 "trtype": "tcp", 00:18:03.629 "traddr": "10.0.0.2", 00:18:03.629 "adrfam": "ipv4", 00:18:03.629 "trsvcid": "4420", 00:18:03.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.629 "hdgst": false, 00:18:03.629 "ddgst": false 00:18:03.629 }, 00:18:03.629 "method": "bdev_nvme_attach_controller" 00:18:03.629 }' 00:18:03.629 [2024-06-09 23:00:31.637937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.637947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.645958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.645968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.653979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.653989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.661999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.662009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.668831] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:18:03.629 [2024-06-09 23:00:31.668896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4087425 ] 00:18:03.629 [2024-06-09 23:00:31.670022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.670032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.678045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.678060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.686066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.686077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 [2024-06-09 23:00:31.694087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.629 [2024-06-09 23:00:31.694097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.629 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.630 [2024-06-09 23:00:31.702109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.702119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.710130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.710140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.718151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.718160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.726171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.726181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.726666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.630 [2024-06-09 23:00:31.734194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.734205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.742215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.742226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.750237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.750248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.758258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.758270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.766279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.766291] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.774300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.774310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.782321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.782331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.788604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.630 [2024-06-09 23:00:31.790341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.790352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.798365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.798375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.630 [2024-06-09 23:00:31.806392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.630 [2024-06-09 23:00:31.806410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.814414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.814425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.822434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.822451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.830455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.830466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.838478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.838488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.846498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.846507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.854520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.854530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.862578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.862595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.874583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.874596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.882603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.882615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.890624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.890637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.898646] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.898659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.906672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.906685] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.914690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.914700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.922716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.922733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.930733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.930744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 Running I/O for 5 seconds... 00:18:03.892 [2024-06-09 23:00:31.938760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.938776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.955780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.955800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.965985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.966005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.975739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.892 [2024-06-09 23:00:31.975758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.892 [2024-06-09 23:00:31.986303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:31.986322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:31.994242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:31.994260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.005636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.005655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.013961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.013980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.023211] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.023230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.032439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.032459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.041554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.041573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.050809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.050828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.059892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.059911] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:03.893 [2024-06-09 23:00:32.068531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:03.893 [2024-06-09 23:00:32.068550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.153 [2024-06-09 23:00:32.078213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.078231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.086824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.086842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.098215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.098233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.106052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.106070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.117670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.117688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.125957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.125976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.135645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.135663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.147415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.147434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.154 [2024-06-09 23:00:32.157446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.154 [2024-06-09 23:00:32.157464] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:04.154 [2024-06-09 23:00:32.165263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:04.154 [2024-06-09 23:00:32.165281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:04.154 [2024-06-09 23:00:32.176712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:04.154 [2024-06-09 23:00:32.176730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for every subsequent add-namespace attempt from 2024-06-09 23:00:32.185 through 23:00:34.947, elapsed 00:18:04.154 to 00:18:07.031 ...]
00:18:07.031 [2024-06-09 23:00:34.957998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:07.031 [2024-06-09 23:00:34.958017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:07.031 [2024-06-09 23:00:34.967941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:07.031 [2024-06-09 23:00:34.967958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:07.031 [2024-06-09 23:00:34.975686]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:34.975704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:34.987164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:34.987182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:34.995537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:34.995554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.006636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.006654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.016267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.016285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.024053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.024070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.035518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.035536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.043993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.044011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.053395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.053417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.062221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.062238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.070725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.070743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.080287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.080304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.031 [2024-06-09 23:00:35.089485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.031 [2024-06-09 23:00:35.089503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.098213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.098231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.107482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.107500] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.116747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.116765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.125439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.125457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.134682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.134704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.144098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.144116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.153005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.153023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.162395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.162419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.171602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.171621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.180947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.180965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.190221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.190239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.199214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.199233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.032 [2024-06-09 23:00:35.208273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.032 [2024-06-09 23:00:35.208291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.217344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.293 [2024-06-09 23:00:35.217362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.226692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.293 [2024-06-09 23:00:35.226710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.235895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.293 [2024-06-09 23:00:35.235913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.244748] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.293 [2024-06-09 23:00:35.244766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.253380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.293 [2024-06-09 23:00:35.253397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.293 [2024-06-09 23:00:35.263045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.263062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.273913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.273931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.282051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.282068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.293820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.293837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.305019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.305037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.314854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.314876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.322814] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.322831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.334423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.334441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.342873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.342890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.352223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.352240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.361245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.361263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.369407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.369425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.379083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.379102] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.387515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.387533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.397179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.397198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.406258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.406277] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.414897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.414915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.424182] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.424200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.433317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.433335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.442136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.442155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.450817] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.450835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.459924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.459943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.294 [2024-06-09 23:00:35.469080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.294 [2024-06-09 23:00:35.469097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.478243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.478262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.487489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.487515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.495899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.495918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.505158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.505176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.513851] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.513870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.523205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.523223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.532308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.532326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.541270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.541288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.549877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.549895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.559322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.559340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.568531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.568549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.577159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.577177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.586432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.586450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.595566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.595584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.604782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.604801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.613944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.613962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.622433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.622451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.631792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.631810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.640987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.641005] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.650024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.650042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.659082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.659104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.668317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.668336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.676922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.676940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.686353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.686372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.695187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.695205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.704585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.704604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.713350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.713368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.721745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.721764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.556 [2024-06-09 23:00:35.731007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.556 [2024-06-09 23:00:35.731026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.740388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.740412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.749815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.749833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.758593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.758611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.767994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.768012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.776528] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.776546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.785746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.785764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.794919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.794937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.813570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.813588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.824447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.824466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.832144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.832163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.843132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.843155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.851193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.851211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.862636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.862655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.870819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.870837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.880466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.880484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.891475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.891494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.899262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.899281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.910188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.910207] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.920482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.920500] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.930072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.930090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.937770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.937789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.948852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.948871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.957009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.957027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.966273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.966291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.975316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.975335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.984635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.984653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.819 [2024-06-09 23:00:35.993498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.819 [2024-06-09 23:00:35.993516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.002784] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.002802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.011978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.011996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.020897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.020915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.030664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.030682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.041935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.041953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.049952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.049970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.060841] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.060860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.068775] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.068793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.080062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.080081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.088384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.088408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.097630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.097648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.106461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.106479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.116022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.116040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.125146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.125164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.136272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.136290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.144297] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.144316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.155884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.155902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.164325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.164343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.173513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.173531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.182681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.182698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.191554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.191572] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.200460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.200486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.209662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.209680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.218825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.218843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.227940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.227958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.237028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.237046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.081 [2024-06-09 23:00:36.245765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.081 [2024-06-09 23:00:36.245784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.082 [2024-06-09 23:00:36.255045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.082 [2024-06-09 23:00:36.255063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.264406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.264424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.273441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.273459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.282743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.282761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.291753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.291770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.300941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.300959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.309990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.310009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.319192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.319210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.328224] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.328243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.337108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.337127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.345827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.345845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.354428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.354446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.363600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.363617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.372759] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.372777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.381383] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.381406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.390867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.390885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.399844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.399863] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.408477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.408502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.417898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.417916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.426340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.426357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.436068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.436086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.445171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.445188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.456384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.456408] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.464653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.464671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.474048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.474066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.482652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.482669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.491750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.491768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.500783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.500801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.510070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.510088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.343 [2024-06-09 23:00:36.519134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.343 [2024-06-09 23:00:36.519151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.528002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.528020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.537635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.537656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.548906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.548924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.558778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.558796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.568666] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.568683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.576765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.576783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.605 [2024-06-09 23:00:36.587607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.605 [2024-06-09 23:00:36.587624] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.597839] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.597857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.605331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.605348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.616644] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.616662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.626084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.626102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.633714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.633731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.644925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.644943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.653440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.653458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.662811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.662829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.672045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.672062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.681117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.681135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.690288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.690306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.698887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.698904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.708421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.708439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.716774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.716795] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.606 [2024-06-09 23:00:36.726484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.606 [2024-06-09 23:00:36.726502] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:08.606 [2024-06-09 23:00:36.737467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:08.606 [2024-06-09 23:00:36.737485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:08.867 [... the same pair of messages repeats for every add-namespace attempt from 23:00:36.745224 through 23:00:36.955495; only the timestamps differ ...]
00:18:08.868 Latency(us)
00:18:08.868 Device Information                                                             : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:08.868 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:08.868 Nvme1n1                                                                        :       5.01   14004.18     109.41      0.00      0.00    9131.23    3426.99   28617.39
00:18:08.868 ===================================================================================================================
00:18:08.868 Total                                                                          :              14004.18     109.41      0.00      0.00    9131.23    3426.99   28617.39
00:18:08.868 [2024-06-09 23:00:36.963498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:08.868 [2024-06-09 23:00:36.963513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:09.130 [... the same pair repeats for every attempt from 23:00:36.971516 through 23:00:37.083829 ...]
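The error burst above is the expected outcome of the zcopy test's namespace churn: while NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1, every further nvmf_subsystem_add_ns call for the same NSID is rejected by spdk_nvmf_subsystem_add_ns_ext, and the RPC layer logs "Unable to add namespace". A minimal sketch of the kind of retry loop that produces this output, using the same rpc.py method that appears later in this trace (the iteration count and the bdev name malloc0 are illustrative assumptions, not values read out of zcopy.sh; the rpc.py path is relative to the spdk checkout):

    # Repeatedly try to (re)add NSID 1 while it is already claimed by the subsystem.
    # Every call is expected to fail with "Requested NSID 1 already in use".
    for _ in $(seq 1 50); do   # iteration count is an assumption
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done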
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4087425) - No such process 00:18:09.130 23:00:37 -- target/zcopy.sh@49 -- # wait 4087425 00:18:09.130 23:00:37 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:09.130 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.130 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:09.130 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.130 23:00:37 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:09.130 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.130 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:09.130 delay0 00:18:09.130 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.130 23:00:37 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:09.130 23:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.130 23:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:09.130 23:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.130 23:00:37 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:09.130 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.130 [2024-06-09 23:00:37.253537] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:15.722 Initializing NVMe Controllers 00:18:15.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:15.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:15.722 Initialization complete. Launching workers. 
00:18:15.722 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:18:15.722 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 393, failed to submit 35 00:18:15.722 success 183, unsuccess 210, failed 0 00:18:15.722 23:00:43 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:15.722 23:00:43 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:15.722 23:00:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:15.722 23:00:43 -- nvmf/common.sh@116 -- # sync 00:18:15.722 23:00:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:15.722 23:00:43 -- nvmf/common.sh@119 -- # set +e 00:18:15.722 23:00:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:15.722 23:00:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:15.722 rmmod nvme_tcp 00:18:15.722 rmmod nvme_fabrics 00:18:15.722 rmmod nvme_keyring 00:18:15.722 23:00:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:15.722 23:00:43 -- nvmf/common.sh@123 -- # set -e 00:18:15.722 23:00:43 -- nvmf/common.sh@124 -- # return 0 00:18:15.722 23:00:43 -- nvmf/common.sh@477 -- # '[' -n 4085223 ']' 00:18:15.722 23:00:43 -- nvmf/common.sh@478 -- # killprocess 4085223 00:18:15.722 23:00:43 -- common/autotest_common.sh@926 -- # '[' -z 4085223 ']' 00:18:15.722 23:00:43 -- common/autotest_common.sh@930 -- # kill -0 4085223 00:18:15.722 23:00:43 -- common/autotest_common.sh@931 -- # uname 00:18:15.722 23:00:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:15.722 23:00:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4085223 00:18:15.722 23:00:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:15.722 23:00:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:15.722 23:00:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4085223' 00:18:15.722 killing process with pid 4085223 00:18:15.722 23:00:43 -- common/autotest_common.sh@945 -- # kill 4085223 00:18:15.722 23:00:43 -- common/autotest_common.sh@950 -- # wait 4085223 00:18:15.722 23:00:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:15.722 23:00:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:15.722 23:00:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:15.722 23:00:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.722 23:00:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:15.722 23:00:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.722 23:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.722 23:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.638 23:00:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:17.638 00:18:17.638 real 0m32.387s 00:18:17.638 user 0m43.711s 00:18:17.638 sys 0m9.391s 00:18:17.638 23:00:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.638 23:00:45 -- common/autotest_common.sh@10 -- # set +x 00:18:17.638 ************************************ 00:18:17.638 END TEST nvmf_zcopy 00:18:17.638 ************************************ 00:18:17.638 23:00:45 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:17.638 23:00:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:17.638 23:00:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.638 23:00:45 -- common/autotest_common.sh@10 -- # set +x 00:18:17.638 ************************************ 
00:18:17.638 START TEST nvmf_nmic 00:18:17.638 ************************************ 00:18:17.638 23:00:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:17.899 * Looking for test storage... 00:18:17.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.899 23:00:45 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.899 23:00:45 -- nvmf/common.sh@7 -- # uname -s 00:18:17.899 23:00:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.899 23:00:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.899 23:00:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.899 23:00:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.899 23:00:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.899 23:00:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.899 23:00:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.899 23:00:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.899 23:00:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.899 23:00:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.899 23:00:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.899 23:00:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.899 23:00:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.899 23:00:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.899 23:00:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:17.899 23:00:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.899 23:00:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.899 23:00:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.899 23:00:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.899 23:00:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.899 23:00:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.899 23:00:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.899 23:00:45 -- paths/export.sh@5 -- # export PATH 00:18:17.899 23:00:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.899 23:00:45 -- nvmf/common.sh@46 -- # : 0 00:18:17.899 23:00:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:17.899 23:00:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:17.899 23:00:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:17.899 23:00:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.899 23:00:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.899 23:00:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:17.899 23:00:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:17.899 23:00:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:17.899 23:00:45 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:17.899 23:00:45 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:17.899 23:00:45 -- target/nmic.sh@14 -- # nvmftestinit 00:18:17.899 23:00:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:17.899 23:00:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.899 23:00:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:17.899 23:00:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:17.899 23:00:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:17.899 23:00:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.899 23:00:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.899 23:00:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.899 23:00:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:17.899 23:00:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:17.899 23:00:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:17.899 23:00:45 -- common/autotest_common.sh@10 -- # set +x 00:18:26.055 23:00:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.055 23:00:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:26.055 23:00:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:26.055 23:00:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:26.055 23:00:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:26.055 23:00:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:26.056 23:00:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:26.056 23:00:52 -- nvmf/common.sh@294 -- # net_devs=() 00:18:26.056 23:00:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:26.056 23:00:52 -- nvmf/common.sh@295 -- # 
e810=() 00:18:26.056 23:00:52 -- nvmf/common.sh@295 -- # local -ga e810 00:18:26.056 23:00:52 -- nvmf/common.sh@296 -- # x722=() 00:18:26.056 23:00:52 -- nvmf/common.sh@296 -- # local -ga x722 00:18:26.056 23:00:52 -- nvmf/common.sh@297 -- # mlx=() 00:18:26.056 23:00:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:26.056 23:00:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.056 23:00:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:26.056 23:00:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:26.056 23:00:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.056 23:00:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:26.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:26.056 23:00:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:26.056 23:00:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:26.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:26.056 23:00:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.056 23:00:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.056 23:00:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.056 23:00:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:26.056 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:18:26.056 23:00:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.056 23:00:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:26.056 23:00:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.056 23:00:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.056 23:00:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:26.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:26.056 23:00:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.056 23:00:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:26.056 23:00:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:26.056 23:00:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:26.056 23:00:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.056 23:00:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.056 23:00:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.056 23:00:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:26.056 23:00:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.056 23:00:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.056 23:00:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:26.056 23:00:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.056 23:00:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.056 23:00:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:26.056 23:00:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:26.056 23:00:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.056 23:00:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.056 23:00:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.056 23:00:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.056 23:00:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:26.056 23:00:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.056 23:00:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.056 23:00:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.056 23:00:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:26.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:18:26.056 00:18:26.056 --- 10.0.0.2 ping statistics --- 00:18:26.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.056 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:18:26.056 23:00:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:18:26.056 00:18:26.056 --- 10.0.0.1 ping statistics --- 00:18:26.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.056 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:18:26.056 23:00:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.056 23:00:53 -- nvmf/common.sh@410 -- # return 0 00:18:26.056 23:00:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:26.056 23:00:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.056 23:00:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:26.056 23:00:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:26.056 23:00:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.056 23:00:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:26.056 23:00:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:26.056 23:00:53 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:26.056 23:00:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:26.056 23:00:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:26.056 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.056 23:00:53 -- nvmf/common.sh@469 -- # nvmfpid=4093859 00:18:26.056 23:00:53 -- nvmf/common.sh@470 -- # waitforlisten 4093859 00:18:26.056 23:00:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:26.056 23:00:53 -- common/autotest_common.sh@819 -- # '[' -z 4093859 ']' 00:18:26.056 23:00:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.056 23:00:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:26.056 23:00:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.056 23:00:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:26.056 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.056 [2024-06-09 23:00:53.127505] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:26.056 [2024-06-09 23:00:53.127571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.056 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.056 [2024-06-09 23:00:53.197504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.056 [2024-06-09 23:00:53.272058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:26.056 [2024-06-09 23:00:53.272201] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.056 [2024-06-09 23:00:53.272210] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.056 [2024-06-09 23:00:53.272219] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
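At this point the nmic test has a freshly started nvmf_tgt (4 cores, -m 0xF, running inside the cvl_0_0_ns_spdk namespace) and drives it over /var/tmp/spdk.sock. For readability, the bring-up that the rpc_cmd traces below perform one call at a time is, in condensed form (rpc_cmd is the test framework's wrapper that ultimately drives scripts/rpc.py; the direct invocations here are a sketch of the equivalent calls, not the wrapper itself):

    # Same parameters as the traced rpc_cmd calls that follow.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8192-byte IO unit
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420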
00:18:26.056 [2024-06-09 23:00:53.272351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.056 [2024-06-09 23:00:53.272488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.056 [2024-06-09 23:00:53.272600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.056 [2024-06-09 23:00:53.272601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.056 23:00:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:26.056 23:00:53 -- common/autotest_common.sh@852 -- # return 0 00:18:26.056 23:00:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:26.056 23:00:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:26.056 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.056 23:00:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.056 23:00:53 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.056 23:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.056 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.056 [2024-06-09 23:00:53.951628] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.056 23:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.056 23:00:53 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.056 23:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.056 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 Malloc0 00:18:26.057 23:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:53 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:26.057 23:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 23:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:53 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.057 23:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.057 23:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 [2024-06-09 23:00:54.011035] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:26.057 test case1: single bdev can't be used in multiple subsystems 00:18:26.057 23:00:54 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:26.057 23:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:26.057 23:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:18:26.057 23:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@28 -- # nmic_status=0 00:18:26.057 23:00:54 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:26.057 23:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 [2024-06-09 23:00:54.047000] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:26.057 [2024-06-09 23:00:54.047021] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:26.057 [2024-06-09 23:00:54.047033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.057 request: 00:18:26.057 { 00:18:26.057 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:26.057 "namespace": { 00:18:26.057 "bdev_name": "Malloc0" 00:18:26.057 }, 00:18:26.057 "method": "nvmf_subsystem_add_ns", 00:18:26.057 "req_id": 1 00:18:26.057 } 00:18:26.057 Got JSON-RPC error response 00:18:26.057 response: 00:18:26.057 { 00:18:26.057 "code": -32602, 00:18:26.057 "message": "Invalid parameters" 00:18:26.057 } 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@29 -- # nmic_status=1 00:18:26.057 23:00:54 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:26.057 23:00:54 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:26.057 Adding namespace failed - expected result. 00:18:26.057 23:00:54 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:26.057 test case2: host connect to nvmf target in multiple paths 00:18:26.057 23:00:54 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:26.057 23:00:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:26.057 23:00:54 -- common/autotest_common.sh@10 -- # set +x 00:18:26.057 [2024-06-09 23:00:54.059133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:26.057 23:00:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:26.057 23:00:54 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:27.483 23:00:55 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:29.400 23:00:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:29.400 23:00:57 -- common/autotest_common.sh@1177 -- # local i=0 00:18:29.400 23:00:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.400 23:00:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:29.400 23:00:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:31.336 23:00:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:31.336 23:00:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:31.336 23:00:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:31.336 23:00:59 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:18:31.336 23:00:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.336 23:00:59 -- common/autotest_common.sh@1187 -- # return 0 00:18:31.336 23:00:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:31.336 [global] 00:18:31.336 thread=1 00:18:31.336 invalidate=1 00:18:31.336 rw=write 00:18:31.336 time_based=1 00:18:31.336 runtime=1 00:18:31.336 ioengine=libaio 00:18:31.336 direct=1 00:18:31.336 bs=4096 00:18:31.336 iodepth=1 00:18:31.336 norandommap=0 00:18:31.336 numjobs=1 00:18:31.336 00:18:31.336 verify_dump=1 00:18:31.336 verify_backlog=512 00:18:31.336 verify_state_save=0 00:18:31.336 do_verify=1 00:18:31.336 verify=crc32c-intel 00:18:31.336 [job0] 00:18:31.336 filename=/dev/nvme0n1 00:18:31.336 Could not set queue depth (nvme0n1) 00:18:31.596 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.596 fio-3.35 00:18:31.596 Starting 1 thread 00:18:32.540 00:18:32.540 job0: (groupid=0, jobs=1): err= 0: pid=4095360: Sun Jun 9 23:01:00 2024 00:18:32.540 read: IOPS=11, BW=48.0KiB/s (49.1kB/s)(48.0KiB/1001msec) 00:18:32.541 slat (nsec): min=25153, max=26788, avg=25655.92, stdev=472.39 00:18:32.541 clat (usec): min=41904, max=42926, avg=42053.31, stdev=279.75 00:18:32.541 lat (usec): min=41929, max=42953, avg=42078.97, stdev=279.97 00:18:32.541 clat percentiles (usec): 00:18:32.541 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:18:32.541 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:32.541 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:32.541 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:32.541 | 99.99th=[42730] 00:18:32.541 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:32.541 slat (nsec): min=9819, max=71530, avg=31049.58, stdev=5423.12 00:18:32.541 clat (usec): min=582, max=3110, avg=922.30, stdev=149.98 00:18:32.541 lat (usec): min=611, max=3142, avg=953.35, stdev=151.10 00:18:32.541 clat percentiles (usec): 00:18:32.541 | 1.00th=[ 611], 5.00th=[ 676], 10.00th=[ 742], 20.00th=[ 832], 00:18:32.541 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 963], 00:18:32.541 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1037], 00:18:32.541 | 99.00th=[ 1139], 99.50th=[ 1319], 99.90th=[ 3097], 99.95th=[ 3097], 00:18:32.541 | 99.99th=[ 3097] 00:18:32.541 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.541 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.541 lat (usec) : 750=10.69%, 1000=64.50% 00:18:32.541 lat (msec) : 2=22.33%, 4=0.19%, 50=2.29% 00:18:32.541 cpu : usr=0.70%, sys=1.90%, ctx=526, majf=0, minf=1 00:18:32.541 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.541 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.541 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.541 00:18:32.541 Run status group 0 (all jobs): 00:18:32.541 READ: bw=48.0KiB/s (49.1kB/s), 48.0KiB/s-48.0KiB/s (49.1kB/s-49.1kB/s), io=48.0KiB (49.2kB), run=1001-1001msec 00:18:32.541 WRITE: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s 
(2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:18:32.541 00:18:32.541 Disk stats (read/write): 00:18:32.541 nvme0n1: ios=59/512, merge=0/0, ticks=830/475, in_queue=1305, util=99.60% 00:18:32.541 23:01:00 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:32.802 23:01:00 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:32.802 23:01:00 -- common/autotest_common.sh@1198 -- # local i=0 00:18:32.802 23:01:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:32.802 23:01:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.802 23:01:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:32.802 23:01:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.802 23:01:00 -- common/autotest_common.sh@1210 -- # return 0 00:18:32.802 23:01:00 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:32.802 23:01:00 -- target/nmic.sh@53 -- # nvmftestfini 00:18:32.802 23:01:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:32.802 23:01:00 -- nvmf/common.sh@116 -- # sync 00:18:32.802 23:01:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:32.802 23:01:00 -- nvmf/common.sh@119 -- # set +e 00:18:32.802 23:01:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:32.802 23:01:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:32.802 rmmod nvme_tcp 00:18:32.802 rmmod nvme_fabrics 00:18:32.802 rmmod nvme_keyring 00:18:32.802 23:01:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:32.802 23:01:00 -- nvmf/common.sh@123 -- # set -e 00:18:32.802 23:01:00 -- nvmf/common.sh@124 -- # return 0 00:18:32.802 23:01:00 -- nvmf/common.sh@477 -- # '[' -n 4093859 ']' 00:18:32.802 23:01:00 -- nvmf/common.sh@478 -- # killprocess 4093859 00:18:32.802 23:01:00 -- common/autotest_common.sh@926 -- # '[' -z 4093859 ']' 00:18:32.802 23:01:00 -- common/autotest_common.sh@930 -- # kill -0 4093859 00:18:32.802 23:01:00 -- common/autotest_common.sh@931 -- # uname 00:18:32.802 23:01:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:32.802 23:01:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4093859 00:18:33.063 23:01:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:33.063 23:01:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:33.063 23:01:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4093859' 00:18:33.063 killing process with pid 4093859 00:18:33.063 23:01:00 -- common/autotest_common.sh@945 -- # kill 4093859 00:18:33.063 23:01:00 -- common/autotest_common.sh@950 -- # wait 4093859 00:18:33.063 23:01:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:33.063 23:01:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:33.063 23:01:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:33.063 23:01:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.063 23:01:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:33.063 23:01:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.063 23:01:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.063 23:01:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.612 23:01:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:35.612 00:18:35.612 real 0m17.458s 00:18:35.612 user 0m49.869s 00:18:35.612 sys 0m6.044s 00:18:35.612 23:01:03 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.612 23:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:35.612 ************************************ 00:18:35.612 END TEST nvmf_nmic 00:18:35.612 ************************************ 00:18:35.612 23:01:03 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:35.612 23:01:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:35.612 23:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:35.612 23:01:03 -- common/autotest_common.sh@10 -- # set +x 00:18:35.612 ************************************ 00:18:35.612 START TEST nvmf_fio_target 00:18:35.612 ************************************ 00:18:35.612 23:01:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:35.612 * Looking for test storage... 00:18:35.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.612 23:01:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.612 23:01:03 -- nvmf/common.sh@7 -- # uname -s 00:18:35.612 23:01:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.612 23:01:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.612 23:01:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.612 23:01:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.612 23:01:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.612 23:01:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.612 23:01:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.612 23:01:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.612 23:01:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.612 23:01:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.612 23:01:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.612 23:01:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.612 23:01:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.612 23:01:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.612 23:01:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.612 23:01:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.612 23:01:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.612 23:01:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.612 23:01:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.612 23:01:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.612 23:01:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.612 23:01:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.612 23:01:03 -- paths/export.sh@5 -- # export PATH 00:18:35.612 23:01:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.612 23:01:03 -- nvmf/common.sh@46 -- # : 0 00:18:35.612 23:01:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:35.612 23:01:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:35.612 23:01:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:35.612 23:01:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.612 23:01:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.612 23:01:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:35.612 23:01:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:35.612 23:01:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:35.612 23:01:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:35.612 23:01:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:35.612 23:01:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.612 23:01:03 -- target/fio.sh@16 -- # nvmftestinit 00:18:35.612 23:01:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:35.612 23:01:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.612 23:01:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:35.612 23:01:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:35.612 23:01:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:35.612 23:01:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.612 23:01:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.612 23:01:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.612 23:01:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:35.612 23:01:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:35.612 23:01:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:35.612 23:01:03 -- 
common/autotest_common.sh@10 -- # set +x 00:18:42.204 23:01:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.204 23:01:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:42.204 23:01:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:42.204 23:01:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:42.204 23:01:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:42.204 23:01:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:42.204 23:01:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:42.204 23:01:09 -- nvmf/common.sh@294 -- # net_devs=() 00:18:42.204 23:01:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:42.204 23:01:09 -- nvmf/common.sh@295 -- # e810=() 00:18:42.204 23:01:09 -- nvmf/common.sh@295 -- # local -ga e810 00:18:42.204 23:01:09 -- nvmf/common.sh@296 -- # x722=() 00:18:42.204 23:01:09 -- nvmf/common.sh@296 -- # local -ga x722 00:18:42.204 23:01:09 -- nvmf/common.sh@297 -- # mlx=() 00:18:42.204 23:01:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:42.204 23:01:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.204 23:01:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:42.204 23:01:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:42.204 23:01:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:42.204 23:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.204 23:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:42.204 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:42.204 23:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.204 23:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:42.204 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:42.204 23:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:18:42.204 23:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:42.204 23:01:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.204 23:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.204 23:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.204 23:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.204 23:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:42.204 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:42.204 23:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.204 23:01:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.204 23:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.204 23:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.204 23:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.204 23:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:42.204 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:42.204 23:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.204 23:01:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:42.204 23:01:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:42.204 23:01:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:42.204 23:01:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:42.204 23:01:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.204 23:01:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.204 23:01:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.204 23:01:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:42.205 23:01:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.205 23:01:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.205 23:01:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:42.205 23:01:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.205 23:01:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.205 23:01:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:42.205 23:01:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:42.205 23:01:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.205 23:01:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.205 23:01:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.205 23:01:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.205 23:01:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:42.205 23:01:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.205 23:01:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.205 23:01:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.205 23:01:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:42.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:18:42.205 00:18:42.205 --- 10.0.0.2 ping statistics --- 00:18:42.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.205 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:18:42.205 23:01:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:18:42.205 00:18:42.205 --- 10.0.0.1 ping statistics --- 00:18:42.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.205 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:18:42.205 23:01:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.205 23:01:10 -- nvmf/common.sh@410 -- # return 0 00:18:42.205 23:01:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:42.205 23:01:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.205 23:01:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:42.205 23:01:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:42.205 23:01:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.205 23:01:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:42.205 23:01:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:42.205 23:01:10 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:42.205 23:01:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.205 23:01:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:42.205 23:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:42.205 23:01:10 -- nvmf/common.sh@469 -- # nvmfpid=4099712 00:18:42.205 23:01:10 -- nvmf/common.sh@470 -- # waitforlisten 4099712 00:18:42.205 23:01:10 -- common/autotest_common.sh@819 -- # '[' -z 4099712 ']' 00:18:42.205 23:01:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.205 23:01:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.205 23:01:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.205 23:01:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.205 23:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:42.205 23:01:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.205 [2024-06-09 23:01:10.228353] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:42.205 [2024-06-09 23:01:10.228444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.205 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.205 [2024-06-09 23:01:10.298807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.205 [2024-06-09 23:01:10.372748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.205 [2024-06-09 23:01:10.372884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.205 [2024-06-09 23:01:10.372895] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:42.205 [2024-06-09 23:01:10.372904] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.205 [2024-06-09 23:01:10.373040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.205 [2024-06-09 23:01:10.373163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.205 [2024-06-09 23:01:10.373323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.205 [2024-06-09 23:01:10.373324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:43.148 23:01:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.148 23:01:10 -- common/autotest_common.sh@852 -- # return 0 00:18:43.148 23:01:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.148 23:01:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:43.148 23:01:10 -- common/autotest_common.sh@10 -- # set +x 00:18:43.148 23:01:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.148 23:01:11 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.148 [2024-06-09 23:01:11.164939] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.148 23:01:11 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.409 23:01:11 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:43.409 23:01:11 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.409 23:01:11 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:43.409 23:01:11 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.670 23:01:11 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:43.670 23:01:11 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:43.931 23:01:11 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:43.931 23:01:11 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:43.932 23:01:12 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.192 23:01:12 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:44.192 23:01:12 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.453 23:01:12 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:44.453 23:01:12 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:44.453 23:01:12 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:44.454 23:01:12 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:44.715 23:01:12 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:44.715 23:01:12 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.715 23:01:12 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:44.976 23:01:13 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:44.976 23:01:13 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:45.236 23:01:13 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.236 [2024-06-09 23:01:13.346304] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.236 23:01:13 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:45.496 23:01:13 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:45.757 23:01:13 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:47.142 23:01:15 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:47.142 23:01:15 -- common/autotest_common.sh@1177 -- # local i=0 00:18:47.142 23:01:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.142 23:01:15 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:18:47.142 23:01:15 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:18:47.142 23:01:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:49.107 23:01:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:49.107 23:01:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:49.107 23:01:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.369 23:01:17 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:18:49.369 23:01:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.369 23:01:17 -- common/autotest_common.sh@1187 -- # return 0 00:18:49.369 23:01:17 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:49.369 [global] 00:18:49.369 thread=1 00:18:49.369 invalidate=1 00:18:49.369 rw=write 00:18:49.369 time_based=1 00:18:49.369 runtime=1 00:18:49.369 ioengine=libaio 00:18:49.369 direct=1 00:18:49.369 bs=4096 00:18:49.369 iodepth=1 00:18:49.369 norandommap=0 00:18:49.369 numjobs=1 00:18:49.369 00:18:49.369 verify_dump=1 00:18:49.369 verify_backlog=512 00:18:49.369 verify_state_save=0 00:18:49.369 do_verify=1 00:18:49.369 verify=crc32c-intel 00:18:49.369 [job0] 00:18:49.369 filename=/dev/nvme0n1 00:18:49.369 [job1] 00:18:49.369 filename=/dev/nvme0n2 00:18:49.369 [job2] 00:18:49.369 filename=/dev/nvme0n3 00:18:49.369 [job3] 00:18:49.369 filename=/dev/nvme0n4 00:18:49.369 Could not set queue depth (nvme0n1) 00:18:49.369 Could not set queue depth (nvme0n2) 00:18:49.369 Could not set queue depth (nvme0n3) 00:18:49.369 Could not set queue depth (nvme0n4) 00:18:49.630 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.630 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.630 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
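The four block devices these fio jobs open (/dev/nvme0n1 through /dev/nvme0n4) are the namespaces provisioned by the rpc.py calls traced above: two plain malloc bdevs, a RAID-0 over Malloc2/Malloc3 and a concat over Malloc4, Malloc5 and Malloc6, all exported through subsystem cnode1 on 10.0.0.2:4420. Condensed into a sketch, with $rpc and $rootdir as stand-ins for the full paths in the trace and the exact ordering simplified:
  rpc="$rootdir/scripts/rpc.py"                 # $rootdir: stand-in for the SPDK checkout used above
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6, 64 MB each, 512 B blocks
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"   # one namespace per bdev
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # exposes nvme0n1..n4 on the initiator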
00:18:49.630 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:49.630 fio-3.35 00:18:49.630 Starting 4 threads 00:18:51.018 00:18:51.018 job0: (groupid=0, jobs=1): err= 0: pid=4101447: Sun Jun 9 23:01:18 2024 00:18:51.018 read: IOPS=272, BW=1090KiB/s (1116kB/s)(1092KiB/1002msec) 00:18:51.018 slat (nsec): min=9006, max=61784, avg=27021.10, stdev=4211.61 00:18:51.018 clat (usec): min=1360, max=1698, avg=1516.93, stdev=55.32 00:18:51.018 lat (usec): min=1384, max=1724, avg=1543.95, stdev=55.61 00:18:51.018 clat percentiles (usec): 00:18:51.018 | 1.00th=[ 1369], 5.00th=[ 1434], 10.00th=[ 1450], 20.00th=[ 1467], 00:18:51.018 | 30.00th=[ 1500], 40.00th=[ 1500], 50.00th=[ 1516], 60.00th=[ 1532], 00:18:51.018 | 70.00th=[ 1549], 80.00th=[ 1565], 90.00th=[ 1582], 95.00th=[ 1598], 00:18:51.018 | 99.00th=[ 1647], 99.50th=[ 1663], 99.90th=[ 1696], 99.95th=[ 1696], 00:18:51.018 | 99.99th=[ 1696] 00:18:51.018 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:51.018 slat (usec): min=9, max=45512, avg=125.14, stdev=2009.79 00:18:51.018 clat (usec): min=768, max=1220, avg=992.38, stdev=87.07 00:18:51.018 lat (usec): min=796, max=46509, avg=1117.52, stdev=2011.91 00:18:51.018 clat percentiles (usec): 00:18:51.018 | 1.00th=[ 783], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 922], 00:18:51.018 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1004], 00:18:51.018 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1156], 00:18:51.018 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:51.018 | 99.99th=[ 1221] 00:18:51.018 bw ( KiB/s): min= 608, max= 3488, per=26.03%, avg=2048.00, stdev=2036.47, samples=2 00:18:51.018 iops : min= 152, max= 872, avg=512.00, stdev=509.12, samples=2 00:18:51.018 lat (usec) : 1000=37.83% 00:18:51.018 lat (msec) : 2=62.17% 00:18:51.018 cpu : usr=2.00%, sys=3.10%, ctx=788, majf=0, minf=1 00:18:51.018 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.018 issued rwts: total=273,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.018 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.018 job1: (groupid=0, jobs=1): err= 0: pid=4101462: Sun Jun 9 23:01:18 2024 00:18:51.018 read: IOPS=25, BW=104KiB/s (106kB/s)(104KiB/1002msec) 00:18:51.018 slat (nsec): min=25981, max=31601, avg=26485.27, stdev=1051.55 00:18:51.018 clat (usec): min=1464, max=42914, avg=18755.93, stdev=20479.87 00:18:51.018 lat (usec): min=1490, max=42940, avg=18782.42, stdev=20479.70 00:18:51.018 clat percentiles (usec): 00:18:51.018 | 1.00th=[ 1467], 5.00th=[ 1516], 10.00th=[ 1532], 20.00th=[ 1549], 00:18:51.019 | 30.00th=[ 1549], 40.00th=[ 1565], 50.00th=[ 1614], 60.00th=[42206], 00:18:51.019 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:18:51.019 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:51.019 | 99.99th=[42730] 00:18:51.019 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:18:51.019 slat (nsec): min=9745, max=61795, avg=34463.71, stdev=4284.75 00:18:51.019 clat (usec): min=606, max=1224, avg=956.13, stdev=105.18 00:18:51.019 lat (usec): min=658, max=1271, avg=990.59, stdev=105.91 00:18:51.019 clat percentiles (usec): 00:18:51.019 | 1.00th=[ 685], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 873], 
00:18:51.019 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 963], 60.00th=[ 979], 00:18:51.019 | 70.00th=[ 1012], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1123], 00:18:51.019 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:51.019 | 99.99th=[ 1221] 00:18:51.019 bw ( KiB/s): min= 96, max= 4000, per=26.03%, avg=2048.00, stdev=2760.54, samples=2 00:18:51.019 iops : min= 24, max= 1000, avg=512.00, stdev=690.14, samples=2 00:18:51.019 lat (usec) : 750=3.53%, 1000=60.41% 00:18:51.019 lat (msec) : 2=34.01%, 50=2.04% 00:18:51.019 cpu : usr=1.50%, sys=2.00%, ctx=539, majf=0, minf=1 00:18:51.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.019 job2: (groupid=0, jobs=1): err= 0: pid=4101481: Sun Jun 9 23:01:18 2024 00:18:51.019 read: IOPS=14, BW=59.8KiB/s (61.3kB/s)(60.0KiB/1003msec) 00:18:51.019 slat (nsec): min=26606, max=28902, avg=27004.60, stdev=561.96 00:18:51.019 clat (usec): min=1556, max=42979, avg=31591.07, stdev=18693.17 00:18:51.019 lat (usec): min=1585, max=43006, avg=31618.08, stdev=18692.95 00:18:51.019 clat percentiles (usec): 00:18:51.019 | 1.00th=[ 1565], 5.00th=[ 1565], 10.00th=[ 1647], 20.00th=[ 1680], 00:18:51.019 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:51.019 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:18:51.019 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:51.019 | 99.99th=[42730] 00:18:51.019 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:18:51.019 slat (usec): min=9, max=4295, avg=44.67, stdev=188.28 00:18:51.019 clat (usec): min=649, max=1217, avg=976.09, stdev=92.78 00:18:51.019 lat (usec): min=684, max=5388, avg=1020.76, stdev=214.63 00:18:51.019 clat percentiles (usec): 00:18:51.019 | 1.00th=[ 742], 5.00th=[ 824], 10.00th=[ 857], 20.00th=[ 914], 00:18:51.019 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 971], 60.00th=[ 996], 00:18:51.019 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1106], 95.00th=[ 1123], 00:18:51.019 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:51.019 | 99.99th=[ 1221] 00:18:51.019 bw ( KiB/s): min= 216, max= 3880, per=26.03%, avg=2048.00, stdev=2590.84, samples=2 00:18:51.019 iops : min= 54, max= 970, avg=512.00, stdev=647.71, samples=2 00:18:51.019 lat (usec) : 750=1.33%, 1000=57.69% 00:18:51.019 lat (msec) : 2=38.90%, 50=2.09% 00:18:51.019 cpu : usr=0.30%, sys=3.19%, ctx=530, majf=0, minf=1 00:18:51.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 issued rwts: total=15,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.019 job3: (groupid=0, jobs=1): err= 0: pid=4101488: Sun Jun 9 23:01:18 2024 00:18:51.019 read: IOPS=11, BW=46.1KiB/s (47.2kB/s)(48.0KiB/1041msec) 00:18:51.019 slat (nsec): min=25416, max=25831, avg=25674.50, stdev=123.51 00:18:51.019 clat (usec): min=41949, max=42967, avg=42328.83, stdev=450.20 00:18:51.019 lat (usec): min=41974, 
max=42993, avg=42354.50, stdev=450.17 00:18:51.019 clat percentiles (usec): 00:18:51.019 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42206], 20.00th=[42206], 00:18:51.019 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:51.019 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:18:51.019 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:51.019 | 99.99th=[42730] 00:18:51.019 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:18:51.019 slat (nsec): min=11086, max=66444, avg=34551.80, stdev=3321.35 00:18:51.019 clat (usec): min=771, max=1243, avg=993.71, stdev=77.96 00:18:51.019 lat (usec): min=806, max=1295, avg=1028.26, stdev=78.40 00:18:51.019 clat percentiles (usec): 00:18:51.019 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 930], 00:18:51.019 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1020], 00:18:51.019 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:18:51.019 | 99.00th=[ 1188], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:18:51.019 | 99.99th=[ 1237] 00:18:51.019 bw ( KiB/s): min= 240, max= 3856, per=26.03%, avg=2048.00, stdev=2556.90, samples=2 00:18:51.019 iops : min= 60, max= 964, avg=512.00, stdev=639.22, samples=2 00:18:51.019 lat (usec) : 1000=53.63% 00:18:51.019 lat (msec) : 2=44.08%, 50=2.29% 00:18:51.019 cpu : usr=0.87%, sys=1.63%, ctx=525, majf=0, minf=1 00:18:51.019 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:51.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.019 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.019 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:51.019 00:18:51.019 Run status group 0 (all jobs): 00:18:51.019 READ: bw=1253KiB/s (1283kB/s), 46.1KiB/s-1090KiB/s (47.2kB/s-1116kB/s), io=1304KiB (1335kB), run=1002-1041msec 00:18:51.019 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2044KiB/s (2015kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1041msec 00:18:51.019 00:18:51.019 Disk stats (read/write): 00:18:51.019 nvme0n1: ios=209/512, merge=0/0, ticks=529/475, in_queue=1004, util=86.77% 00:18:51.019 nvme0n2: ios=30/512, merge=0/0, ticks=1184/480, in_queue=1664, util=87.86% 00:18:51.019 nvme0n3: ios=59/512, merge=0/0, ticks=477/503, in_queue=980, util=95.03% 00:18:51.019 nvme0n4: ios=31/512, merge=0/0, ticks=1178/513, in_queue=1691, util=94.01% 00:18:51.019 23:01:18 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:51.019 [global] 00:18:51.019 thread=1 00:18:51.019 invalidate=1 00:18:51.019 rw=randwrite 00:18:51.019 time_based=1 00:18:51.019 runtime=1 00:18:51.019 ioengine=libaio 00:18:51.019 direct=1 00:18:51.019 bs=4096 00:18:51.019 iodepth=1 00:18:51.019 norandommap=0 00:18:51.019 numjobs=1 00:18:51.019 00:18:51.019 verify_dump=1 00:18:51.019 verify_backlog=512 00:18:51.019 verify_state_save=0 00:18:51.019 do_verify=1 00:18:51.019 verify=crc32c-intel 00:18:51.019 [job0] 00:18:51.019 filename=/dev/nvme0n1 00:18:51.019 [job1] 00:18:51.019 filename=/dev/nvme0n2 00:18:51.019 [job2] 00:18:51.019 filename=/dev/nvme0n3 00:18:51.019 [job3] 00:18:51.019 filename=/dev/nvme0n4 00:18:51.019 Could not set queue depth (nvme0n1) 00:18:51.019 Could not set queue depth (nvme0n2) 00:18:51.019 Could not set queue depth (nvme0n3) 00:18:51.019 
Could not set queue depth (nvme0n4) 00:18:51.281 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.281 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.281 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.281 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:51.281 fio-3.35 00:18:51.281 Starting 4 threads 00:18:52.669 00:18:52.669 job0: (groupid=0, jobs=1): err= 0: pid=4101939: Sun Jun 9 23:01:20 2024 00:18:52.669 read: IOPS=11, BW=47.5KiB/s (48.6kB/s)(48.0KiB/1011msec) 00:18:52.669 slat (nsec): min=25056, max=25705, avg=25346.92, stdev=192.54 00:18:52.669 clat (usec): min=41867, max=42547, avg=42010.79, stdev=179.96 00:18:52.669 lat (usec): min=41892, max=42572, avg=42036.14, stdev=180.00 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:52.669 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:52.669 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:18:52.669 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:52.669 | 99.99th=[42730] 00:18:52.669 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:18:52.669 slat (nsec): min=9968, max=65487, avg=32452.96, stdev=2625.70 00:18:52.669 clat (usec): min=708, max=1528, avg=946.38, stdev=82.32 00:18:52.669 lat (usec): min=725, max=1561, avg=978.83, stdev=82.50 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 881], 00:18:52.669 | 30.00th=[ 914], 40.00th=[ 938], 50.00th=[ 955], 60.00th=[ 971], 00:18:52.669 | 70.00th=[ 979], 80.00th=[ 1004], 90.00th=[ 1037], 95.00th=[ 1057], 00:18:52.669 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[ 1532], 99.95th=[ 1532], 00:18:52.669 | 99.99th=[ 1532] 00:18:52.669 bw ( KiB/s): min= 48, max= 4048, per=25.28%, avg=2048.00, stdev=2828.43, samples=2 00:18:52.669 iops : min= 12, max= 1012, avg=512.00, stdev=707.11, samples=2 00:18:52.669 lat (usec) : 750=2.10%, 1000=76.15% 00:18:52.669 lat (msec) : 2=19.47%, 50=2.29% 00:18:52.669 cpu : usr=1.09%, sys=1.39%, ctx=527, majf=0, minf=1 00:18:52.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.669 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.669 job1: (groupid=0, jobs=1): err= 0: pid=4101952: Sun Jun 9 23:01:20 2024 00:18:52.669 read: IOPS=325, BW=1303KiB/s (1334kB/s)(1304KiB/1001msec) 00:18:52.669 slat (nsec): min=8055, max=88502, avg=27361.10, stdev=4980.42 00:18:52.669 clat (usec): min=1061, max=1574, avg=1401.81, stdev=64.52 00:18:52.669 lat (usec): min=1088, max=1600, avg=1429.17, stdev=64.70 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[ 1221], 5.00th=[ 1287], 10.00th=[ 1336], 20.00th=[ 1369], 00:18:52.669 | 30.00th=[ 1385], 40.00th=[ 1385], 50.00th=[ 1401], 60.00th=[ 1418], 00:18:52.669 | 70.00th=[ 1434], 80.00th=[ 1450], 90.00th=[ 1467], 95.00th=[ 1500], 00:18:52.669 | 99.00th=[ 1549], 99.50th=[ 1582], 99.90th=[ 1582], 99.95th=[ 1582], 00:18:52.669 | 99.99th=[ 
1582] 00:18:52.669 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:52.669 slat (nsec): min=10269, max=67956, avg=32910.08, stdev=3953.81 00:18:52.669 clat (usec): min=715, max=1661, avg=995.97, stdev=122.04 00:18:52.669 lat (usec): min=750, max=1694, avg=1028.88, stdev=122.45 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[ 742], 5.00th=[ 799], 10.00th=[ 848], 20.00th=[ 898], 00:18:52.669 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1029], 00:18:52.669 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1205], 00:18:52.669 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1663], 99.95th=[ 1663], 00:18:52.669 | 99.99th=[ 1663] 00:18:52.669 bw ( KiB/s): min= 3864, max= 3864, per=47.69%, avg=3864.00, stdev= 0.00, samples=1 00:18:52.669 iops : min= 966, max= 966, avg=966.00, stdev= 0.00, samples=1 00:18:52.669 lat (usec) : 750=0.72%, 1000=30.91% 00:18:52.669 lat (msec) : 2=68.38% 00:18:52.669 cpu : usr=2.00%, sys=2.90%, ctx=840, majf=0, minf=1 00:18:52.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.669 issued rwts: total=326,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.669 job2: (groupid=0, jobs=1): err= 0: pid=4101973: Sun Jun 9 23:01:20 2024 00:18:52.669 read: IOPS=319, BW=1279KiB/s (1309kB/s)(1280KiB/1001msec) 00:18:52.669 slat (nsec): min=26788, max=75093, avg=27926.85, stdev=4002.77 00:18:52.669 clat (usec): min=1177, max=2057, avg=1416.53, stdev=75.57 00:18:52.669 lat (usec): min=1205, max=2085, avg=1444.45, stdev=75.30 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[ 1254], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1369], 00:18:52.669 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[ 1418], 60.00th=[ 1434], 00:18:52.669 | 70.00th=[ 1450], 80.00th=[ 1467], 90.00th=[ 1483], 95.00th=[ 1500], 00:18:52.669 | 99.00th=[ 1565], 99.50th=[ 1811], 99.90th=[ 2057], 99.95th=[ 2057], 00:18:52.669 | 99.99th=[ 2057] 00:18:52.669 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:52.669 slat (nsec): min=10929, max=52231, avg=33716.54, stdev=5237.67 00:18:52.669 clat (usec): min=653, max=1730, avg=1003.06, stdev=149.50 00:18:52.669 lat (usec): min=669, max=1764, avg=1036.78, stdev=149.97 00:18:52.669 clat percentiles (usec): 00:18:52.669 | 1.00th=[ 725], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 906], 00:18:52.669 | 30.00th=[ 922], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 1004], 00:18:52.669 | 70.00th=[ 1037], 80.00th=[ 1090], 90.00th=[ 1172], 95.00th=[ 1270], 00:18:52.669 | 99.00th=[ 1582], 99.50th=[ 1631], 99.90th=[ 1729], 99.95th=[ 1729], 00:18:52.669 | 99.99th=[ 1729] 00:18:52.670 bw ( KiB/s): min= 3808, max= 3808, per=47.00%, avg=3808.00, stdev= 0.00, samples=1 00:18:52.670 iops : min= 952, max= 952, avg=952.00, stdev= 0.00, samples=1 00:18:52.670 lat (usec) : 750=1.32%, 1000=34.13% 00:18:52.670 lat (msec) : 2=64.42%, 4=0.12% 00:18:52.670 cpu : usr=1.10%, sys=4.30%, ctx=834, majf=0, minf=1 00:18:52.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.670 issued rwts: total=320,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:18:52.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.670 job3: (groupid=0, jobs=1): err= 0: pid=4101981: Sun Jun 9 23:01:20 2024 00:18:52.670 read: IOPS=318, BW=1275KiB/s (1305kB/s)(1276KiB/1001msec) 00:18:52.670 slat (nsec): min=25598, max=63152, avg=27213.03, stdev=4600.98 00:18:52.670 clat (usec): min=1204, max=1601, avg=1418.21, stdev=54.99 00:18:52.670 lat (usec): min=1231, max=1627, avg=1445.42, stdev=55.00 00:18:52.670 clat percentiles (usec): 00:18:52.670 | 1.00th=[ 1303], 5.00th=[ 1319], 10.00th=[ 1352], 20.00th=[ 1385], 00:18:52.670 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[ 1418], 60.00th=[ 1434], 00:18:52.670 | 70.00th=[ 1450], 80.00th=[ 1467], 90.00th=[ 1483], 95.00th=[ 1500], 00:18:52.670 | 99.00th=[ 1565], 99.50th=[ 1565], 99.90th=[ 1598], 99.95th=[ 1598], 00:18:52.670 | 99.99th=[ 1598] 00:18:52.670 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:52.670 slat (nsec): min=11201, max=65851, avg=33660.61, stdev=3244.69 00:18:52.670 clat (usec): min=717, max=2842, avg=1006.13, stdev=133.79 00:18:52.670 lat (usec): min=750, max=2881, avg=1039.79, stdev=134.07 00:18:52.670 clat percentiles (usec): 00:18:52.670 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 914], 00:18:52.670 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1037], 00:18:52.670 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:18:52.670 | 99.00th=[ 1221], 99.50th=[ 1450], 99.90th=[ 2835], 99.95th=[ 2835], 00:18:52.670 | 99.99th=[ 2835] 00:18:52.670 bw ( KiB/s): min= 3808, max= 3808, per=47.00%, avg=3808.00, stdev= 0.00, samples=1 00:18:52.670 iops : min= 952, max= 952, avg=952.00, stdev= 0.00, samples=1 00:18:52.670 lat (usec) : 750=0.36%, 1000=27.68% 00:18:52.670 lat (msec) : 2=71.72%, 4=0.24% 00:18:52.670 cpu : usr=1.00%, sys=4.30%, ctx=832, majf=0, minf=1 00:18:52.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:52.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.670 issued rwts: total=319,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:52.670 00:18:52.670 Run status group 0 (all jobs): 00:18:52.670 READ: bw=3865KiB/s (3958kB/s), 47.5KiB/s-1303KiB/s (48.6kB/s-1334kB/s), io=3908KiB (4002kB), run=1001-1011msec 00:18:52.670 WRITE: bw=8103KiB/s (8297kB/s), 2026KiB/s-2046KiB/s (2074kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1011msec 00:18:52.670 00:18:52.670 Disk stats (read/write): 00:18:52.670 nvme0n1: ios=41/512, merge=0/0, ticks=1273/478, in_queue=1751, util=99.60% 00:18:52.670 nvme0n2: ios=238/512, merge=0/0, ticks=1245/499, in_queue=1744, util=97.35% 00:18:52.670 nvme0n3: ios=250/512, merge=0/0, ticks=720/496, in_queue=1216, util=99.58% 00:18:52.670 nvme0n4: ios=256/512, merge=0/0, ticks=607/494, in_queue=1101, util=97.97% 00:18:52.670 23:01:20 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:52.670 [global] 00:18:52.670 thread=1 00:18:52.670 invalidate=1 00:18:52.670 rw=write 00:18:52.670 time_based=1 00:18:52.670 runtime=1 00:18:52.670 ioengine=libaio 00:18:52.670 direct=1 00:18:52.670 bs=4096 00:18:52.670 iodepth=128 00:18:52.670 norandommap=0 00:18:52.670 numjobs=1 00:18:52.670 00:18:52.670 verify_dump=1 00:18:52.670 verify_backlog=512 00:18:52.670 verify_state_save=0 00:18:52.670 
do_verify=1 00:18:52.670 verify=crc32c-intel 00:18:52.670 [job0] 00:18:52.670 filename=/dev/nvme0n1 00:18:52.670 [job1] 00:18:52.670 filename=/dev/nvme0n2 00:18:52.670 [job2] 00:18:52.670 filename=/dev/nvme0n3 00:18:52.670 [job3] 00:18:52.670 filename=/dev/nvme0n4 00:18:52.670 Could not set queue depth (nvme0n1) 00:18:52.670 Could not set queue depth (nvme0n2) 00:18:52.670 Could not set queue depth (nvme0n3) 00:18:52.670 Could not set queue depth (nvme0n4) 00:18:52.932 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.932 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.932 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.932 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:52.932 fio-3.35 00:18:52.932 Starting 4 threads 00:18:54.320 00:18:54.320 job0: (groupid=0, jobs=1): err= 0: pid=4102425: Sun Jun 9 23:01:22 2024 00:18:54.320 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:18:54.320 slat (nsec): min=882, max=19194k, avg=89893.05, stdev=705317.79 00:18:54.320 clat (usec): min=709, max=50072, avg=12910.22, stdev=7297.08 00:18:54.320 lat (usec): min=2202, max=50098, avg=13000.11, stdev=7358.21 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 3982], 5.00th=[ 6521], 10.00th=[ 7046], 20.00th=[ 7635], 00:18:54.320 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11338], 00:18:54.320 | 70.00th=[13304], 80.00th=[18482], 90.00th=[22414], 95.00th=[29492], 00:18:54.320 | 99.00th=[40633], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:54.320 | 99.99th=[50070] 00:18:54.320 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:18:54.320 slat (nsec): min=1592, max=9834.9k, avg=66069.73, stdev=444646.51 00:18:54.320 clat (usec): min=1967, max=47993, avg=11885.24, stdev=6238.52 00:18:54.320 lat (usec): min=2282, max=48026, avg=11951.31, stdev=6256.06 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 4080], 5.00th=[ 5538], 10.00th=[ 6456], 20.00th=[ 7177], 00:18:54.320 | 30.00th=[ 8094], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11994], 00:18:54.320 | 70.00th=[13566], 80.00th=[14746], 90.00th=[18220], 95.00th=[23462], 00:18:54.320 | 99.00th=[39584], 99.50th=[42206], 99.90th=[43254], 99.95th=[47449], 00:18:54.320 | 99.99th=[47973] 00:18:54.320 bw ( KiB/s): min=17936, max=23024, per=28.30%, avg=20480.00, stdev=3597.76, samples=2 00:18:54.320 iops : min= 4484, max= 5756, avg=5120.00, stdev=899.44, samples=2 00:18:54.320 lat (usec) : 750=0.01% 00:18:54.320 lat (msec) : 2=0.01%, 4=0.96%, 10=46.83%, 20=40.60%, 50=11.59% 00:18:54.320 lat (msec) : 100=0.01% 00:18:54.320 cpu : usr=3.80%, sys=5.10%, ctx=517, majf=0, minf=1 00:18:54.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:54.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.320 issued rwts: total=5115,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.320 job1: (groupid=0, jobs=1): err= 0: pid=4102439: Sun Jun 9 23:01:22 2024 00:18:54.320 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:18:54.320 slat (nsec): min=847, max=43920k, avg=143639.06, stdev=1138367.74 00:18:54.320 clat 
(usec): min=920, max=80280, avg=18272.97, stdev=13086.01 00:18:54.320 lat (usec): min=927, max=80284, avg=18416.61, stdev=13145.71 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 2024], 5.00th=[ 2638], 10.00th=[ 5669], 20.00th=[10552], 00:18:54.320 | 30.00th=[11863], 40.00th=[13042], 50.00th=[15008], 60.00th=[17433], 00:18:54.320 | 70.00th=[20841], 80.00th=[24249], 90.00th=[31065], 95.00th=[44303], 00:18:54.320 | 99.00th=[72877], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:18:54.320 | 99.99th=[80217] 00:18:54.320 write: IOPS=4038, BW=15.8MiB/s (16.5MB/s)(15.9MiB/1007msec); 0 zone resets 00:18:54.320 slat (nsec): min=1495, max=10262k, avg=105327.13, stdev=608669.00 00:18:54.320 clat (usec): min=1085, max=56322, avg=15327.00, stdev=8723.05 00:18:54.320 lat (usec): min=1093, max=56326, avg=15432.33, stdev=8737.23 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 1237], 5.00th=[ 2343], 10.00th=[ 5735], 20.00th=[ 8291], 00:18:54.320 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13960], 60.00th=[16319], 00:18:54.320 | 70.00th=[18482], 80.00th=[21365], 90.00th=[24249], 95.00th=[28705], 00:18:54.320 | 99.00th=[45876], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:18:54.320 | 99.99th=[56361] 00:18:54.320 bw ( KiB/s): min=15128, max=16384, per=21.77%, avg=15756.00, stdev=888.13, samples=2 00:18:54.320 iops : min= 3782, max= 4096, avg=3939.00, stdev=222.03, samples=2 00:18:54.320 lat (usec) : 1000=0.01% 00:18:54.320 lat (msec) : 2=2.25%, 4=5.69%, 10=12.77%, 20=50.11%, 50=27.41% 00:18:54.320 lat (msec) : 100=1.76% 00:18:54.320 cpu : usr=2.68%, sys=3.98%, ctx=370, majf=0, minf=2 00:18:54.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:54.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.320 issued rwts: total=3584,4067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.320 job2: (groupid=0, jobs=1): err= 0: pid=4102457: Sun Jun 9 23:01:22 2024 00:18:54.320 read: IOPS=5543, BW=21.7MiB/s (22.7MB/s)(22.0MiB/1016msec) 00:18:54.320 slat (nsec): min=896, max=13713k, avg=83831.86, stdev=629765.64 00:18:54.320 clat (usec): min=4376, max=28424, avg=11475.54, stdev=4232.08 00:18:54.320 lat (usec): min=4506, max=28450, avg=11559.37, stdev=4271.54 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 5407], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 7570], 00:18:54.320 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[12125], 00:18:54.320 | 70.00th=[13042], 80.00th=[15270], 90.00th=[17433], 95.00th=[19530], 00:18:54.320 | 99.00th=[21627], 99.50th=[23987], 99.90th=[24773], 99.95th=[25822], 00:18:54.320 | 99.99th=[28443] 00:18:54.320 write: IOPS=5746, BW=22.4MiB/s (23.5MB/s)(22.8MiB/1016msec); 0 zone resets 00:18:54.320 slat (nsec): min=1540, max=7851.1k, avg=83751.08, stdev=468943.15 00:18:54.320 clat (usec): min=1473, max=37433, avg=11002.06, stdev=6149.26 00:18:54.320 lat (usec): min=1498, max=37447, avg=11085.82, stdev=6178.65 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 3294], 5.00th=[ 4686], 10.00th=[ 5342], 20.00th=[ 6390], 00:18:54.320 | 30.00th=[ 6980], 40.00th=[ 7767], 50.00th=[ 9241], 60.00th=[10814], 00:18:54.320 | 70.00th=[12518], 80.00th=[15270], 90.00th=[19268], 95.00th=[22676], 00:18:54.320 | 99.00th=[34341], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:18:54.320 | 99.99th=[37487] 00:18:54.320 
bw ( KiB/s): min=22056, max=23624, per=31.56%, avg=22840.00, stdev=1108.74, samples=2 00:18:54.320 iops : min= 5514, max= 5906, avg=5710.00, stdev=277.19, samples=2 00:18:54.320 lat (msec) : 2=0.03%, 4=1.55%, 10=48.15%, 20=43.79%, 50=6.48% 00:18:54.320 cpu : usr=3.65%, sys=5.42%, ctx=549, majf=0, minf=1 00:18:54.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:54.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.320 issued rwts: total=5632,5838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.320 job3: (groupid=0, jobs=1): err= 0: pid=4102464: Sun Jun 9 23:01:22 2024 00:18:54.320 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.8MiB/1057msec) 00:18:54.320 slat (nsec): min=918, max=15596k, avg=124051.86, stdev=757777.30 00:18:54.320 clat (usec): min=2531, max=67640, avg=18220.66, stdev=11279.10 00:18:54.320 lat (usec): min=2539, max=67643, avg=18344.71, stdev=11311.59 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 4817], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[ 9896], 00:18:54.320 | 30.00th=[11994], 40.00th=[13566], 50.00th=[14484], 60.00th=[16909], 00:18:54.320 | 70.00th=[20841], 80.00th=[25822], 90.00th=[30016], 95.00th=[34341], 00:18:54.320 | 99.00th=[64226], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:18:54.320 | 99.99th=[67634] 00:18:54.320 write: IOPS=3875, BW=15.1MiB/s (15.9MB/s)(16.0MiB/1057msec); 0 zone resets 00:18:54.320 slat (nsec): min=1628, max=10214k, avg=119298.64, stdev=682261.55 00:18:54.320 clat (usec): min=1584, max=38770, avg=15604.61, stdev=6675.23 00:18:54.320 lat (usec): min=1589, max=38773, avg=15723.91, stdev=6699.49 00:18:54.320 clat percentiles (usec): 00:18:54.320 | 1.00th=[ 4113], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[ 9896], 00:18:54.320 | 30.00th=[10683], 40.00th=[11731], 50.00th=[14222], 60.00th=[16057], 00:18:54.320 | 70.00th=[19006], 80.00th=[21890], 90.00th=[25297], 95.00th=[28181], 00:18:54.320 | 99.00th=[32113], 99.50th=[35390], 99.90th=[37487], 99.95th=[38536], 00:18:54.320 | 99.99th=[38536] 00:18:54.320 bw ( KiB/s): min=16384, max=16384, per=22.64%, avg=16384.00, stdev= 0.00, samples=2 00:18:54.320 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:54.320 lat (msec) : 2=0.04%, 4=0.61%, 10=20.06%, 20=50.67%, 50=27.03% 00:18:54.320 lat (msec) : 100=1.60% 00:18:54.320 cpu : usr=2.94%, sys=3.69%, ctx=442, majf=0, minf=1 00:18:54.320 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:54.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.320 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.320 issued rwts: total=3789,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.320 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.320 00:18:54.320 Run status group 0 (all jobs): 00:18:54.320 READ: bw=67.0MiB/s (70.2MB/s), 13.9MiB/s-21.7MiB/s (14.6MB/s-22.7MB/s), io=70.8MiB (74.2MB), run=1001-1057msec 00:18:54.320 WRITE: bw=70.7MiB/s (74.1MB/s), 15.1MiB/s-22.4MiB/s (15.9MB/s-23.5MB/s), io=74.7MiB (78.3MB), run=1001-1057msec 00:18:54.320 00:18:54.320 Disk stats (read/write): 00:18:54.321 nvme0n1: ios=3768/4096, merge=0/0, ticks=34943/40233, in_queue=75176, util=97.80% 00:18:54.321 nvme0n2: ios=3112/3535, merge=0/0, ticks=24359/25517, in_queue=49876, util=97.96% 00:18:54.321 nvme0n3: ios=4661/5110, 
merge=0/0, ticks=44485/48545, in_queue=93030, util=100.00% 00:18:54.321 nvme0n4: ios=3105/3082, merge=0/0, ticks=21802/25161, in_queue=46963, util=95.52% 00:18:54.321 23:01:22 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:54.321 [global] 00:18:54.321 thread=1 00:18:54.321 invalidate=1 00:18:54.321 rw=randwrite 00:18:54.321 time_based=1 00:18:54.321 runtime=1 00:18:54.321 ioengine=libaio 00:18:54.321 direct=1 00:18:54.321 bs=4096 00:18:54.321 iodepth=128 00:18:54.321 norandommap=0 00:18:54.321 numjobs=1 00:18:54.321 00:18:54.321 verify_dump=1 00:18:54.321 verify_backlog=512 00:18:54.321 verify_state_save=0 00:18:54.321 do_verify=1 00:18:54.321 verify=crc32c-intel 00:18:54.321 [job0] 00:18:54.321 filename=/dev/nvme0n1 00:18:54.321 [job1] 00:18:54.321 filename=/dev/nvme0n2 00:18:54.321 [job2] 00:18:54.321 filename=/dev/nvme0n3 00:18:54.321 [job3] 00:18:54.321 filename=/dev/nvme0n4 00:18:54.321 Could not set queue depth (nvme0n1) 00:18:54.321 Could not set queue depth (nvme0n2) 00:18:54.321 Could not set queue depth (nvme0n3) 00:18:54.321 Could not set queue depth (nvme0n4) 00:18:54.581 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.581 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.581 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.581 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:54.581 fio-3.35 00:18:54.581 Starting 4 threads 00:18:55.964 00:18:55.964 job0: (groupid=0, jobs=1): err= 0: pid=4102944: Sun Jun 9 23:01:23 2024 00:18:55.964 read: IOPS=6885, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1003msec) 00:18:55.964 slat (nsec): min=941, max=8847.5k, avg=70328.44, stdev=468427.15 00:18:55.964 clat (usec): min=1878, max=22093, avg=8999.83, stdev=3315.68 00:18:55.964 lat (usec): min=4209, max=22095, avg=9070.16, stdev=3341.95 00:18:55.964 clat percentiles (usec): 00:18:55.964 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5407], 20.00th=[ 6194], 00:18:55.964 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 9110], 00:18:55.964 | 70.00th=[ 9634], 80.00th=[10814], 90.00th=[13566], 95.00th=[16581], 00:18:55.964 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21627], 99.95th=[22152], 00:18:55.964 | 99.99th=[22152] 00:18:55.964 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:18:55.964 slat (nsec): min=1567, max=6468.1k, avg=67726.97, stdev=338555.12 00:18:55.964 clat (usec): min=1811, max=22091, avg=9027.99, stdev=3606.82 00:18:55.964 lat (usec): min=1836, max=22094, avg=9095.71, stdev=3626.99 00:18:55.964 clat percentiles (usec): 00:18:55.964 | 1.00th=[ 2900], 5.00th=[ 3851], 10.00th=[ 4621], 20.00th=[ 5669], 00:18:55.964 | 30.00th=[ 6390], 40.00th=[ 7635], 50.00th=[ 8717], 60.00th=[10028], 00:18:55.964 | 70.00th=[11076], 80.00th=[11994], 90.00th=[13960], 95.00th=[15533], 00:18:55.964 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[20317], 00:18:55.964 | 99.99th=[22152] 00:18:55.964 bw ( KiB/s): min=28328, max=29016, per=36.55%, avg=28672.00, stdev=486.49, samples=2 00:18:55.964 iops : min= 7082, max= 7254, avg=7168.00, stdev=121.62, samples=2 00:18:55.964 lat (msec) : 2=0.03%, 4=3.29%, 10=63.29%, 20=33.07%, 50=0.33% 00:18:55.964 cpu : usr=2.59%, sys=6.59%, ctx=826, majf=0, 
minf=1 00:18:55.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:55.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.964 issued rwts: total=6906,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.964 job1: (groupid=0, jobs=1): err= 0: pid=4102951: Sun Jun 9 23:01:23 2024 00:18:55.964 read: IOPS=3821, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1008msec) 00:18:55.964 slat (nsec): min=919, max=20519k, avg=129636.88, stdev=972981.93 00:18:55.964 clat (usec): min=3083, max=45939, avg=18649.43, stdev=6023.00 00:18:55.964 lat (usec): min=3093, max=45972, avg=18779.07, stdev=6082.22 00:18:55.964 clat percentiles (usec): 00:18:55.964 | 1.00th=[ 7439], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[13173], 00:18:55.964 | 30.00th=[15401], 40.00th=[17171], 50.00th=[18482], 60.00th=[19792], 00:18:55.964 | 70.00th=[20579], 80.00th=[24773], 90.00th=[27919], 95.00th=[28443], 00:18:55.964 | 99.00th=[32113], 99.50th=[32900], 99.90th=[40109], 99.95th=[41157], 00:18:55.964 | 99.99th=[45876] 00:18:55.964 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:18:55.964 slat (nsec): min=1584, max=13008k, avg=105240.82, stdev=705204.20 00:18:55.964 clat (usec): min=1257, max=36661, avg=13616.09, stdev=5392.01 00:18:55.964 lat (usec): min=1268, max=36685, avg=13721.33, stdev=5406.51 00:18:55.964 clat percentiles (usec): 00:18:55.964 | 1.00th=[ 3556], 5.00th=[ 6783], 10.00th=[ 8291], 20.00th=[ 9634], 00:18:55.964 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12649], 60.00th=[13960], 00:18:55.964 | 70.00th=[15139], 80.00th=[16909], 90.00th=[20055], 95.00th=[22414], 00:18:55.964 | 99.00th=[33817], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:18:55.964 | 99.99th=[36439] 00:18:55.964 bw ( KiB/s): min=16384, max=16384, per=20.89%, avg=16384.00, stdev= 0.00, samples=2 00:18:55.964 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:18:55.964 lat (msec) : 2=0.06%, 4=0.70%, 10=15.34%, 20=60.88%, 50=23.01% 00:18:55.964 cpu : usr=2.88%, sys=4.07%, ctx=321, majf=0, minf=1 00:18:55.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.964 issued rwts: total=3852,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.964 job2: (groupid=0, jobs=1): err= 0: pid=4102964: Sun Jun 9 23:01:23 2024 00:18:55.964 read: IOPS=3584, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1009msec) 00:18:55.964 slat (nsec): min=922, max=16475k, avg=110524.08, stdev=775978.02 00:18:55.964 clat (usec): min=1565, max=45061, avg=15637.39, stdev=6434.53 00:18:55.964 lat (usec): min=1567, max=45071, avg=15747.92, stdev=6489.00 00:18:55.964 clat percentiles (usec): 00:18:55.964 | 1.00th=[ 2573], 5.00th=[ 5080], 10.00th=[ 9110], 20.00th=[10683], 00:18:55.964 | 30.00th=[12256], 40.00th=[13304], 50.00th=[15008], 60.00th=[16188], 00:18:55.964 | 70.00th=[18220], 80.00th=[20055], 90.00th=[24249], 95.00th=[27395], 00:18:55.964 | 99.00th=[35914], 99.50th=[35914], 99.90th=[42206], 99.95th=[42730], 00:18:55.964 | 99.99th=[44827] 00:18:55.964 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:18:55.964 slat (nsec): min=1552, max=10892k, 
avg=101370.23, stdev=650530.91 00:18:55.964 clat (usec): min=1075, max=113784, avg=15442.86, stdev=16443.11 00:18:55.964 lat (usec): min=1083, max=113791, avg=15544.23, stdev=16483.25 00:18:55.964 clat percentiles (msec): 00:18:55.964 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 8], 00:18:55.964 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:18:55.964 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 24], 95.00th=[ 40], 00:18:55.964 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 114], 99.95th=[ 114], 00:18:55.964 | 99.99th=[ 114] 00:18:55.964 bw ( KiB/s): min=15624, max=20480, per=23.01%, avg=18052.00, stdev=3433.71, samples=2 00:18:55.964 iops : min= 3906, max= 5120, avg=4513.00, stdev=858.43, samples=2 00:18:55.964 lat (msec) : 2=0.68%, 4=4.04%, 10=20.53%, 20=58.95%, 50=14.07% 00:18:55.964 lat (msec) : 100=0.78%, 250=0.95% 00:18:55.964 cpu : usr=2.38%, sys=4.07%, ctx=519, majf=0, minf=1 00:18:55.964 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.964 issued rwts: total=3617,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.964 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.964 job3: (groupid=0, jobs=1): err= 0: pid=4102970: Sun Jun 9 23:01:23 2024 00:18:55.965 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec) 00:18:55.965 slat (nsec): min=986, max=15741k, avg=118734.74, stdev=851924.66 00:18:55.965 clat (usec): min=6013, max=30543, avg=16347.29, stdev=4656.13 00:18:55.965 lat (usec): min=7027, max=30704, avg=16466.03, stdev=4692.73 00:18:55.965 clat percentiles (usec): 00:18:55.965 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[12387], 00:18:55.965 | 30.00th=[13698], 40.00th=[14877], 50.00th=[16188], 60.00th=[17433], 00:18:55.965 | 70.00th=[19006], 80.00th=[20317], 90.00th=[22938], 95.00th=[23987], 00:18:55.965 | 99.00th=[26608], 99.50th=[29492], 99.90th=[30540], 99.95th=[30540], 00:18:55.965 | 99.99th=[30540] 00:18:55.965 write: IOPS=3958, BW=15.5MiB/s (16.2MB/s)(15.7MiB/1014msec); 0 zone resets 00:18:55.965 slat (nsec): min=1616, max=11839k, avg=135630.70, stdev=842523.34 00:18:55.965 clat (usec): min=1926, max=84324, avg=17342.58, stdev=11509.81 00:18:55.965 lat (usec): min=1941, max=84333, avg=17478.21, stdev=11564.60 00:18:55.965 clat percentiles (usec): 00:18:55.965 | 1.00th=[ 5080], 5.00th=[ 6980], 10.00th=[ 8848], 20.00th=[10945], 00:18:55.965 | 30.00th=[11994], 40.00th=[13960], 50.00th=[15533], 60.00th=[16909], 00:18:55.965 | 70.00th=[18482], 80.00th=[20055], 90.00th=[21627], 95.00th=[35914], 00:18:55.965 | 99.00th=[73925], 99.50th=[78119], 99.90th=[84411], 99.95th=[84411], 00:18:55.965 | 99.99th=[84411] 00:18:55.965 bw ( KiB/s): min=14704, max=16384, per=19.81%, avg=15544.00, stdev=1187.94, samples=2 00:18:55.965 iops : min= 3676, max= 4096, avg=3886.00, stdev=296.98, samples=2 00:18:55.965 lat (msec) : 2=0.03%, 4=0.22%, 10=12.38%, 20=66.43%, 50=19.06% 00:18:55.965 lat (msec) : 100=1.88% 00:18:55.965 cpu : usr=3.46%, sys=3.65%, ctx=329, majf=0, minf=1 00:18:55.965 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:55.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.965 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.965 issued rwts: total=3584,4014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.965 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:55.965 00:18:55.965 Run status group 0 (all jobs): 00:18:55.965 READ: bw=69.2MiB/s (72.5MB/s), 13.8MiB/s-26.9MiB/s (14.5MB/s-28.2MB/s), io=70.2MiB (73.6MB), run=1003-1014msec 00:18:55.965 WRITE: bw=76.6MiB/s (80.3MB/s), 15.5MiB/s-27.9MiB/s (16.2MB/s-29.3MB/s), io=77.7MiB (81.5MB), run=1003-1014msec 00:18:55.965 00:18:55.965 Disk stats (read/write): 00:18:55.965 nvme0n1: ios=5661/5678, merge=0/0, ticks=51296/52041, in_queue=103337, util=97.70% 00:18:55.965 nvme0n2: ios=3049/3072, merge=0/0, ticks=57863/38197, in_queue=96060, util=94.09% 00:18:55.965 nvme0n3: ios=3215/4096, merge=0/0, ticks=30910/39235, in_queue=70145, util=100.00% 00:18:55.965 nvme0n4: ios=3154/3584, merge=0/0, ticks=52263/51499, in_queue=103762, util=96.90% 00:18:55.965 23:01:23 -- target/fio.sh@55 -- # sync 00:18:55.965 23:01:23 -- target/fio.sh@59 -- # fio_pid=4103258 00:18:55.965 23:01:23 -- target/fio.sh@61 -- # sleep 3 00:18:55.965 23:01:23 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:55.965 [global] 00:18:55.965 thread=1 00:18:55.965 invalidate=1 00:18:55.965 rw=read 00:18:55.965 time_based=1 00:18:55.965 runtime=10 00:18:55.965 ioengine=libaio 00:18:55.965 direct=1 00:18:55.965 bs=4096 00:18:55.965 iodepth=1 00:18:55.965 norandommap=1 00:18:55.965 numjobs=1 00:18:55.965 00:18:55.965 [job0] 00:18:55.965 filename=/dev/nvme0n1 00:18:55.965 [job1] 00:18:55.965 filename=/dev/nvme0n2 00:18:55.965 [job2] 00:18:55.965 filename=/dev/nvme0n3 00:18:55.965 [job3] 00:18:55.965 filename=/dev/nvme0n4 00:18:55.965 Could not set queue depth (nvme0n1) 00:18:55.965 Could not set queue depth (nvme0n2) 00:18:55.965 Could not set queue depth (nvme0n3) 00:18:55.965 Could not set queue depth (nvme0n4) 00:18:56.224 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.224 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.224 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.224 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.224 fio-3.35 00:18:56.224 Starting 4 threads 00:18:59.528 23:01:26 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:59.528 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=6496256, buflen=4096 00:18:59.528 fio: pid=4103484, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.528 23:01:27 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:59.528 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8282112, buflen=4096 00:18:59.528 fio: pid=4103481, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.528 23:01:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.528 23:01:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:59.528 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8425472, buflen=4096 00:18:59.528 fio: pid=4103472, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.528 23:01:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:18:59.528 23:01:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:59.528 23:01:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.528 23:01:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:59.528 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=299008, buflen=4096 00:18:59.528 fio: pid=4103474, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:59.528 00:18:59.528 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4103472: Sun Jun 9 23:01:27 2024 00:18:59.528 read: IOPS=708, BW=2833KiB/s (2901kB/s)(8228KiB/2904msec) 00:18:59.528 slat (usec): min=7, max=28599, avg=41.52, stdev=647.00 00:18:59.528 clat (usec): min=533, max=1707, avg=1353.27, stdev=86.86 00:18:59.528 lat (usec): min=542, max=29897, avg=1394.80, stdev=651.85 00:18:59.528 clat percentiles (usec): 00:18:59.528 | 1.00th=[ 1106], 5.00th=[ 1188], 10.00th=[ 1254], 20.00th=[ 1303], 00:18:59.528 | 30.00th=[ 1319], 40.00th=[ 1336], 50.00th=[ 1369], 60.00th=[ 1385], 00:18:59.528 | 70.00th=[ 1401], 80.00th=[ 1418], 90.00th=[ 1450], 95.00th=[ 1467], 00:18:59.528 | 99.00th=[ 1549], 99.50th=[ 1582], 99.90th=[ 1598], 99.95th=[ 1663], 00:18:59.528 | 99.99th=[ 1713] 00:18:59.528 bw ( KiB/s): min= 2832, max= 2968, per=38.76%, avg=2881.60, stdev=51.35, samples=5 00:18:59.528 iops : min= 708, max= 742, avg=720.40, stdev=12.84, samples=5 00:18:59.528 lat (usec) : 750=0.15%, 1000=0.05% 00:18:59.528 lat (msec) : 2=99.76% 00:18:59.528 cpu : usr=0.72%, sys=2.10%, ctx=2061, majf=0, minf=1 00:18:59.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.528 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.528 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.528 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4103474: Sun Jun 9 23:01:27 2024 00:18:59.528 read: IOPS=23, BW=94.6KiB/s (96.8kB/s)(292KiB/3088msec) 00:18:59.528 slat (usec): min=15, max=8361, avg=138.72, stdev=968.94 00:18:59.528 clat (usec): min=1737, max=48950, avg=41859.65, stdev=4848.50 00:18:59.528 lat (usec): min=1798, max=50966, avg=41999.92, stdev=4959.11 00:18:59.528 clat percentiles (usec): 00:18:59.528 | 1.00th=[ 1745], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:18:59.528 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:59.528 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:18:59.528 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:18:59.528 | 99.99th=[49021] 00:18:59.528 bw ( KiB/s): min= 96, max= 96, per=1.29%, avg=96.00, stdev= 0.00, samples=5 00:18:59.528 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:18:59.528 lat (msec) : 2=1.35%, 50=97.30% 00:18:59.528 cpu : usr=0.13%, sys=0.00%, ctx=76, majf=0, minf=1 00:18:59.528 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.528 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.528 
issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.528 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.528 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4103481: Sun Jun 9 23:01:27 2024 00:18:59.528 read: IOPS=736, BW=2944KiB/s (3015kB/s)(8088KiB/2747msec) 00:18:59.528 slat (nsec): min=6320, max=75807, avg=23508.15, stdev=5867.76 00:18:59.528 clat (usec): min=307, max=42510, avg=1318.51, stdev=5090.48 00:18:59.528 lat (usec): min=332, max=42534, avg=1342.02, stdev=5090.55 00:18:59.528 clat percentiles (usec): 00:18:59.528 | 1.00th=[ 396], 5.00th=[ 441], 10.00th=[ 469], 20.00th=[ 529], 00:18:59.528 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[ 701], 60.00th=[ 750], 00:18:59.528 | 70.00th=[ 791], 80.00th=[ 832], 90.00th=[ 873], 95.00th=[ 898], 00:18:59.528 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:18:59.528 | 99.99th=[42730] 00:18:59.529 bw ( KiB/s): min= 304, max= 5896, per=36.81%, avg=2736.00, stdev=2697.39, samples=5 00:18:59.529 iops : min= 76, max= 1474, avg=684.00, stdev=674.35, samples=5 00:18:59.529 lat (usec) : 500=15.22%, 750=45.67%, 1000=37.27% 00:18:59.529 lat (msec) : 2=0.20%, 10=0.05%, 50=1.53% 00:18:59.529 cpu : usr=0.73%, sys=2.04%, ctx=2025, majf=0, minf=1 00:18:59.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.529 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.529 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.529 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4103484: Sun Jun 9 23:01:27 2024 00:18:59.529 read: IOPS=616, BW=2465KiB/s (2524kB/s)(6344KiB/2574msec) 00:18:59.529 slat (nsec): min=7161, max=71052, avg=25037.72, stdev=3699.50 00:18:59.529 clat (usec): min=1131, max=1841, avg=1585.29, stdev=82.77 00:18:59.529 lat (usec): min=1156, max=1866, avg=1610.33, stdev=82.77 00:18:59.529 clat percentiles (usec): 00:18:59.529 | 1.00th=[ 1319], 5.00th=[ 1434], 10.00th=[ 1500], 20.00th=[ 1532], 00:18:59.529 | 30.00th=[ 1565], 40.00th=[ 1582], 50.00th=[ 1598], 60.00th=[ 1614], 00:18:59.529 | 70.00th=[ 1631], 80.00th=[ 1647], 90.00th=[ 1680], 95.00th=[ 1696], 00:18:59.529 | 99.00th=[ 1745], 99.50th=[ 1762], 99.90th=[ 1811], 99.95th=[ 1844], 00:18:59.529 | 99.99th=[ 1844] 00:18:59.529 bw ( KiB/s): min= 2416, max= 2520, per=33.31%, avg=2476.80, stdev=52.34, samples=5 00:18:59.529 iops : min= 604, max= 630, avg=619.20, stdev=13.08, samples=5 00:18:59.529 lat (msec) : 2=99.94% 00:18:59.529 cpu : usr=0.66%, sys=1.79%, ctx=1590, majf=0, minf=2 00:18:59.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.529 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.529 issued rwts: total=1587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.529 00:18:59.529 Run status group 0 (all jobs): 00:18:59.529 READ: bw=7433KiB/s (7611kB/s), 94.6KiB/s-2944KiB/s (96.8kB/s-3015kB/s), io=22.4MiB (23.5MB), run=2574-3088msec 00:18:59.529 00:18:59.529 Disk stats (read/write): 00:18:59.529 nvme0n1: ios=2023/0, merge=0/0, ticks=2705/0, in_queue=2705, util=93.59% 00:18:59.529 
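The err=121 (Remote I/O error) results above are intentional: this final pass is the hotplug test, which leaves a 10-second read job running while the RAID, concat and malloc bdevs backing the namespaces are deleted underneath it, so fio is expected to exit with errors. Roughly, with $rootdir again standing in for the SPDK checkout path shown in the trace:
  fio="$rootdir/scripts/fio-wrapper"; rpc="$rootdir/scripts/rpc.py"
  $fio -p nvmf -i 4096 -d 1 -t read -r 10 &     # 10 s read job against nvme0n1..n4, run in the background
  fio_pid=$!
  sleep 3                                        # let the jobs ramp up first
  $rpc bdev_raid_delete concat0                  # pull the backing bdevs out from under the namespaces
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$m"
  done
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'   # non-zero exit is the pass condition here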
nvme0n2: ios=67/0, merge=0/0, ticks=2796/0, in_queue=2796, util=95.27% 00:18:59.529 nvme0n3: ios=1824/0, merge=0/0, ticks=2493/0, in_queue=2493, util=95.99% 00:18:59.529 nvme0n4: ios=1448/0, merge=0/0, ticks=2253/0, in_queue=2253, util=96.06% 00:18:59.790 23:01:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.790 23:01:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:59.790 23:01:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:59.790 23:01:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:00.050 23:01:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.050 23:01:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:00.311 23:01:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:00.311 23:01:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:00.572 23:01:28 -- target/fio.sh@69 -- # fio_status=0 00:19:00.572 23:01:28 -- target/fio.sh@70 -- # wait 4103258 00:19:00.572 23:01:28 -- target/fio.sh@70 -- # fio_status=4 00:19:00.572 23:01:28 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:00.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:00.572 23:01:28 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:00.572 23:01:28 -- common/autotest_common.sh@1198 -- # local i=0 00:19:00.572 23:01:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:00.572 23:01:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.572 23:01:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:00.572 23:01:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:00.572 23:01:28 -- common/autotest_common.sh@1210 -- # return 0 00:19:00.572 23:01:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:00.572 23:01:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:00.572 nvmf hotplug test: fio failed as expected 00:19:00.572 23:01:28 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:00.834 23:01:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:00.834 23:01:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:00.834 23:01:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:00.834 23:01:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:00.834 23:01:28 -- target/fio.sh@91 -- # nvmftestfini 00:19:00.834 23:01:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:00.834 23:01:28 -- nvmf/common.sh@116 -- # sync 00:19:00.834 23:01:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:00.834 23:01:28 -- nvmf/common.sh@119 -- # set +e 00:19:00.834 23:01:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:00.834 23:01:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:00.834 rmmod nvme_tcp 00:19:00.834 rmmod nvme_fabrics 00:19:00.834 rmmod nvme_keyring 00:19:00.834 23:01:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:00.834 23:01:28 -- nvmf/common.sh@123 
-- # set -e 00:19:00.834 23:01:28 -- nvmf/common.sh@124 -- # return 0 00:19:00.834 23:01:28 -- nvmf/common.sh@477 -- # '[' -n 4099712 ']' 00:19:00.834 23:01:28 -- nvmf/common.sh@478 -- # killprocess 4099712 00:19:00.834 23:01:28 -- common/autotest_common.sh@926 -- # '[' -z 4099712 ']' 00:19:00.834 23:01:28 -- common/autotest_common.sh@930 -- # kill -0 4099712 00:19:00.834 23:01:28 -- common/autotest_common.sh@931 -- # uname 00:19:00.834 23:01:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.834 23:01:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4099712 00:19:00.834 23:01:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:00.834 23:01:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:00.834 23:01:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4099712' 00:19:00.834 killing process with pid 4099712 00:19:00.834 23:01:28 -- common/autotest_common.sh@945 -- # kill 4099712 00:19:00.834 23:01:28 -- common/autotest_common.sh@950 -- # wait 4099712 00:19:01.095 23:01:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:01.096 23:01:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:01.096 23:01:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:01.096 23:01:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.096 23:01:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:01.096 23:01:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.096 23:01:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.096 23:01:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.010 23:01:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:03.010 00:19:03.010 real 0m27.846s 00:19:03.010 user 2m33.764s 00:19:03.010 sys 0m8.468s 00:19:03.010 23:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.010 23:01:31 -- common/autotest_common.sh@10 -- # set +x 00:19:03.010 ************************************ 00:19:03.010 END TEST nvmf_fio_target 00:19:03.010 ************************************ 00:19:03.010 23:01:31 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:03.010 23:01:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:03.010 23:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:03.010 23:01:31 -- common/autotest_common.sh@10 -- # set +x 00:19:03.010 ************************************ 00:19:03.010 START TEST nvmf_bdevio 00:19:03.010 ************************************ 00:19:03.010 23:01:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:03.271 * Looking for test storage... 
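The hotplug sequence traced above deletes the backing malloc bdevs while fio is still running against the exported namespaces, then treats a non-zero fio exit as the expected outcome. A condensed sketch of that flow, with paths shortened and $fio_pid standing in for the backgrounded fio process (an illustration of what target/fio.sh is doing here, not a verbatim excerpt):

# Tear the malloc bdevs out from under the running fio job; the initiator is
# expected to start seeing err=121 (Remote I/O error) on the affected files.
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
fio_status=0
wait "$fio_pid" || fio_status=$?
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi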
00:19:03.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.271 23:01:31 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.271 23:01:31 -- nvmf/common.sh@7 -- # uname -s 00:19:03.271 23:01:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.271 23:01:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.271 23:01:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.271 23:01:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.271 23:01:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.271 23:01:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.271 23:01:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.271 23:01:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.271 23:01:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.271 23:01:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.271 23:01:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.272 23:01:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.272 23:01:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.272 23:01:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.272 23:01:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.272 23:01:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.272 23:01:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.272 23:01:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.272 23:01:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.272 23:01:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.272 23:01:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.272 23:01:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.272 23:01:31 -- paths/export.sh@5 -- # export PATH 00:19:03.272 23:01:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.272 23:01:31 -- nvmf/common.sh@46 -- # : 0 00:19:03.272 23:01:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:03.272 23:01:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:03.272 23:01:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:03.272 23:01:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.272 23:01:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.272 23:01:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:03.272 23:01:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:03.272 23:01:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:03.272 23:01:31 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.272 23:01:31 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.272 23:01:31 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:03.272 23:01:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:03.272 23:01:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.272 23:01:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:03.272 23:01:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:03.272 23:01:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:03.272 23:01:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.272 23:01:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.272 23:01:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.272 23:01:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:03.272 23:01:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:03.272 23:01:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:03.272 23:01:31 -- common/autotest_common.sh@10 -- # set +x 00:19:09.925 23:01:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:09.925 23:01:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:09.925 23:01:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:09.925 23:01:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:09.925 23:01:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:09.925 23:01:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:09.925 23:01:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:09.925 23:01:37 -- nvmf/common.sh@294 -- # net_devs=() 00:19:09.925 23:01:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:09.925 23:01:37 -- nvmf/common.sh@295 
-- # e810=() 00:19:09.925 23:01:37 -- nvmf/common.sh@295 -- # local -ga e810 00:19:09.925 23:01:37 -- nvmf/common.sh@296 -- # x722=() 00:19:09.925 23:01:37 -- nvmf/common.sh@296 -- # local -ga x722 00:19:09.925 23:01:37 -- nvmf/common.sh@297 -- # mlx=() 00:19:09.925 23:01:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:09.925 23:01:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.925 23:01:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:09.925 23:01:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:09.925 23:01:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:09.925 23:01:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:09.925 23:01:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:09.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:09.925 23:01:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:09.925 23:01:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:09.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:09.925 23:01:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:09.925 23:01:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:09.925 23:01:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.926 23:01:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.926 23:01:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.926 23:01:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.926 23:01:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:09.926 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:19:09.926 23:01:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.926 23:01:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:09.926 23:01:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.926 23:01:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:09.926 23:01:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.926 23:01:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:09.926 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:09.926 23:01:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.926 23:01:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:09.926 23:01:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:09.926 23:01:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:09.926 23:01:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.926 23:01:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.926 23:01:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.926 23:01:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:09.926 23:01:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.926 23:01:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.926 23:01:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:09.926 23:01:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.926 23:01:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.926 23:01:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:09.926 23:01:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:09.926 23:01:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.926 23:01:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.926 23:01:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.926 23:01:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.926 23:01:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:09.926 23:01:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.926 23:01:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.926 23:01:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.926 23:01:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:09.926 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.926 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:19:09.926 00:19:09.926 --- 10.0.0.2 ping statistics --- 00:19:09.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.926 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:19:09.926 23:01:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.926 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.926 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:19:09.926 00:19:09.926 --- 10.0.0.1 ping statistics --- 00:19:09.926 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.926 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:19:09.926 23:01:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.926 23:01:37 -- nvmf/common.sh@410 -- # return 0 00:19:09.926 23:01:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:09.926 23:01:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.926 23:01:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:09.926 23:01:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.926 23:01:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:09.926 23:01:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:09.926 23:01:37 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:09.926 23:01:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.926 23:01:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:09.926 23:01:37 -- common/autotest_common.sh@10 -- # set +x 00:19:09.926 23:01:37 -- nvmf/common.sh@469 -- # nvmfpid=4108489 00:19:09.926 23:01:37 -- nvmf/common.sh@470 -- # waitforlisten 4108489 00:19:09.926 23:01:37 -- common/autotest_common.sh@819 -- # '[' -z 4108489 ']' 00:19:09.926 23:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.926 23:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.926 23:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.926 23:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.926 23:01:37 -- common/autotest_common.sh@10 -- # set +x 00:19:09.926 23:01:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:09.926 [2024-06-09 23:01:37.617279] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:09.926 [2024-06-09 23:01:37.617334] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.926 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.926 [2024-06-09 23:01:37.701515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.926 [2024-06-09 23:01:37.792817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:09.926 [2024-06-09 23:01:37.792975] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.926 [2024-06-09 23:01:37.792986] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.926 [2024-06-09 23:01:37.792994] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
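For reference, the target in this phase runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init set up in the trace above. A condensed sketch of that plumbing, with the workspace path shortened (an illustrative summary of the traced commands, not an exact excerpt):

# Move the target-side port into a private namespace and address both ends of
# the link: cvl_0_1 (initiator side) gets 10.0.0.1, cvl_0_0 (target side,
# inside the namespace) gets 10.0.0.2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to the default port and sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# The target application is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78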
00:19:09.926 [2024-06-09 23:01:37.793167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:09.926 [2024-06-09 23:01:37.793327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:09.926 [2024-06-09 23:01:37.793489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:09.926 [2024-06-09 23:01:37.793489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.499 23:01:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:10.499 23:01:38 -- common/autotest_common.sh@852 -- # return 0 00:19:10.499 23:01:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:10.499 23:01:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:10.499 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.499 23:01:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.499 23:01:38 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.499 23:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.499 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.499 [2024-06-09 23:01:38.437848] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.499 23:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.499 23:01:38 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:10.499 23:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.499 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.499 Malloc0 00:19:10.499 23:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.499 23:01:38 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:10.499 23:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.499 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.499 23:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.499 23:01:38 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.499 23:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.499 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.499 23:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.499 23:01:38 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.500 23:01:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:10.500 23:01:38 -- common/autotest_common.sh@10 -- # set +x 00:19:10.500 [2024-06-09 23:01:38.491270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.500 23:01:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:10.500 23:01:38 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:10.500 23:01:38 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:10.500 23:01:38 -- nvmf/common.sh@520 -- # config=() 00:19:10.500 23:01:38 -- nvmf/common.sh@520 -- # local subsystem config 00:19:10.500 23:01:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:10.500 23:01:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:10.500 { 00:19:10.500 "params": { 00:19:10.500 "name": "Nvme$subsystem", 00:19:10.500 "trtype": "$TEST_TRANSPORT", 00:19:10.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:10.500 "adrfam": "ipv4", 00:19:10.500 "trsvcid": 
"$NVMF_PORT", 00:19:10.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:10.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:10.500 "hdgst": ${hdgst:-false}, 00:19:10.500 "ddgst": ${ddgst:-false} 00:19:10.500 }, 00:19:10.500 "method": "bdev_nvme_attach_controller" 00:19:10.500 } 00:19:10.500 EOF 00:19:10.500 )") 00:19:10.500 23:01:38 -- nvmf/common.sh@542 -- # cat 00:19:10.500 23:01:38 -- nvmf/common.sh@544 -- # jq . 00:19:10.500 23:01:38 -- nvmf/common.sh@545 -- # IFS=, 00:19:10.500 23:01:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:10.500 "params": { 00:19:10.500 "name": "Nvme1", 00:19:10.500 "trtype": "tcp", 00:19:10.500 "traddr": "10.0.0.2", 00:19:10.500 "adrfam": "ipv4", 00:19:10.500 "trsvcid": "4420", 00:19:10.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.500 "hdgst": false, 00:19:10.500 "ddgst": false 00:19:10.500 }, 00:19:10.500 "method": "bdev_nvme_attach_controller" 00:19:10.500 }' 00:19:10.500 [2024-06-09 23:01:38.519014] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:10.500 [2024-06-09 23:01:38.519066] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108548 ] 00:19:10.500 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.500 [2024-06-09 23:01:38.574086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:10.500 [2024-06-09 23:01:38.641464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.500 [2024-06-09 23:01:38.641484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.500 [2024-06-09 23:01:38.641487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.761 [2024-06-09 23:01:38.865594] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:10.761 [2024-06-09 23:01:38.865626] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:10.761 I/O targets: 00:19:10.761 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:10.761 00:19:10.761 00:19:10.761 CUnit - A unit testing framework for C - Version 2.1-3 00:19:10.761 http://cunit.sourceforge.net/ 00:19:10.761 00:19:10.761 00:19:10.761 Suite: bdevio tests on: Nvme1n1 00:19:10.761 Test: blockdev write read block ...passed 00:19:11.022 Test: blockdev write zeroes read block ...passed 00:19:11.022 Test: blockdev write zeroes read no split ...passed 00:19:11.022 Test: blockdev write zeroes read split ...passed 00:19:11.022 Test: blockdev write zeroes read split partial ...passed 00:19:11.022 Test: blockdev reset ...[2024-06-09 23:01:39.016723] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:11.022 [2024-06-09 23:01:39.016766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb04e0 (9): Bad file descriptor 00:19:11.023 [2024-06-09 23:01:39.073592] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:11.023 passed 00:19:11.023 Test: blockdev write read 8 blocks ...passed 00:19:11.023 Test: blockdev write read size > 128k ...passed 00:19:11.023 Test: blockdev write read invalid size ...passed 00:19:11.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:11.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:11.023 Test: blockdev write read max offset ...passed 00:19:11.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:11.285 Test: blockdev writev readv 8 blocks ...passed 00:19:11.285 Test: blockdev writev readv 30 x 1block ...passed 00:19:11.285 Test: blockdev writev readv block ...passed 00:19:11.285 Test: blockdev writev readv size > 128k ...passed 00:19:11.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:11.285 Test: blockdev comparev and writev ...[2024-06-09 23:01:39.348644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.348670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.348681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.348687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.349361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.349374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.349383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.349389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.350039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.350048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.350057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.350063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.350744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.350752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.350762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:11.285 [2024-06-09 23:01:39.350767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:11.285 passed 00:19:11.285 Test: blockdev nvme passthru rw ...passed 00:19:11.285 Test: blockdev nvme passthru vendor specific ...[2024-06-09 23:01:39.435426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:11.285 [2024-06-09 23:01:39.435437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.435949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:11.285 [2024-06-09 23:01:39.435957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.436492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:11.285 [2024-06-09 23:01:39.436500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:11.285 [2024-06-09 23:01:39.436992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:11.285 [2024-06-09 23:01:39.437000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:11.285 passed 00:19:11.285 Test: blockdev nvme admin passthru ...passed 00:19:11.546 Test: blockdev copy ...passed 00:19:11.546 00:19:11.546 Run Summary: Type Total Ran Passed Failed Inactive 00:19:11.546 suites 1 1 n/a 0 0 00:19:11.546 tests 23 23 23 0 0 00:19:11.546 asserts 152 152 152 0 n/a 00:19:11.546 00:19:11.546 Elapsed time = 1.246 seconds 00:19:11.546 23:01:39 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.546 23:01:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:11.546 23:01:39 -- common/autotest_common.sh@10 -- # set +x 00:19:11.546 23:01:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:11.547 23:01:39 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:11.547 23:01:39 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:11.547 23:01:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:11.547 23:01:39 -- nvmf/common.sh@116 -- # sync 00:19:11.547 23:01:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:11.547 23:01:39 -- nvmf/common.sh@119 -- # set +e 00:19:11.547 23:01:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:11.547 23:01:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:11.547 rmmod nvme_tcp 00:19:11.547 rmmod nvme_fabrics 00:19:11.547 rmmod nvme_keyring 00:19:11.547 23:01:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:11.547 23:01:39 -- nvmf/common.sh@123 -- # set -e 00:19:11.547 23:01:39 -- nvmf/common.sh@124 -- # return 0 00:19:11.547 23:01:39 -- nvmf/common.sh@477 -- # '[' -n 4108489 ']' 00:19:11.547 23:01:39 -- nvmf/common.sh@478 -- # killprocess 4108489 00:19:11.547 23:01:39 -- common/autotest_common.sh@926 -- # '[' -z 4108489 ']' 00:19:11.547 23:01:39 -- common/autotest_common.sh@930 -- # kill -0 4108489 00:19:11.547 23:01:39 -- common/autotest_common.sh@931 -- # uname 00:19:11.547 23:01:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:11.547 23:01:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4108489 00:19:11.807 23:01:39 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:11.807 23:01:39 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:11.807 23:01:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4108489' 00:19:11.807 killing process with pid 4108489 00:19:11.807 23:01:39 -- common/autotest_common.sh@945 -- # kill 4108489 00:19:11.807 23:01:39 -- common/autotest_common.sh@950 -- # wait 4108489 00:19:11.807 23:01:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:11.807 23:01:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:11.807 23:01:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:11.807 23:01:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:11.807 23:01:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:11.807 23:01:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.807 23:01:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.807 23:01:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.355 23:01:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:14.355 00:19:14.355 real 0m10.842s 00:19:14.355 user 0m12.395s 00:19:14.355 sys 0m5.196s 00:19:14.355 23:01:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.355 23:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.355 ************************************ 00:19:14.355 END TEST nvmf_bdevio 00:19:14.355 ************************************ 00:19:14.355 23:01:42 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:14.355 23:01:42 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:14.355 23:01:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:14.355 23:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:14.355 23:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:14.355 ************************************ 00:19:14.355 START TEST nvmf_bdevio_no_huge 00:19:14.355 ************************************ 00:19:14.355 23:01:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:14.355 * Looking for test storage... 
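The nvmf_bdevio_no_huge run that starts here repeats the same bdevio suite, but both the target and the bdevio app are launched without hugepages and with a 1 GiB memory cap. The flags below are taken from the invocations traced later in this run, with paths shortened (illustrative sketch only):

# Target side: same nvmf_tgt, plus --no-huge and -s 1024 so EAL falls back to
# regular 4 KiB pages with at most 1024 MiB of memory.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
# Initiator side: bdevio reads the generated bdev_nvme_attach_controller JSON
# from fd 62 and runs with the same no-hugepage settings.
./test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024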
00:19:14.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:14.355 23:01:42 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.355 23:01:42 -- nvmf/common.sh@7 -- # uname -s 00:19:14.355 23:01:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.355 23:01:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.355 23:01:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.355 23:01:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.355 23:01:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.355 23:01:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.355 23:01:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.355 23:01:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.355 23:01:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.355 23:01:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.355 23:01:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.355 23:01:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.355 23:01:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.355 23:01:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.355 23:01:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.355 23:01:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.355 23:01:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.355 23:01:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.355 23:01:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.355 23:01:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.355 23:01:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.355 23:01:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.355 23:01:42 -- paths/export.sh@5 -- # export PATH 00:19:14.355 23:01:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.355 23:01:42 -- nvmf/common.sh@46 -- # : 0 00:19:14.355 23:01:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:14.355 23:01:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:14.355 23:01:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:14.355 23:01:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.355 23:01:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.355 23:01:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:14.355 23:01:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:14.355 23:01:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:14.355 23:01:42 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.355 23:01:42 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.355 23:01:42 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:14.355 23:01:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:14.355 23:01:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:14.355 23:01:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:14.355 23:01:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:14.355 23:01:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:14.355 23:01:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.355 23:01:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.355 23:01:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:14.355 23:01:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:14.355 23:01:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:14.355 23:01:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:14.355 23:01:42 -- common/autotest_common.sh@10 -- # set +x 00:19:20.948 23:01:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:20.948 23:01:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:20.948 23:01:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:20.948 23:01:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:20.948 23:01:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:20.948 23:01:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:20.948 23:01:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:20.948 23:01:48 -- nvmf/common.sh@294 -- # net_devs=() 00:19:20.948 23:01:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:20.948 23:01:48 -- nvmf/common.sh@295 
-- # e810=() 00:19:20.948 23:01:48 -- nvmf/common.sh@295 -- # local -ga e810 00:19:20.948 23:01:48 -- nvmf/common.sh@296 -- # x722=() 00:19:20.948 23:01:48 -- nvmf/common.sh@296 -- # local -ga x722 00:19:20.948 23:01:48 -- nvmf/common.sh@297 -- # mlx=() 00:19:20.948 23:01:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:20.948 23:01:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.948 23:01:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:20.948 23:01:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:20.948 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:20.948 23:01:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:20.948 23:01:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:20.948 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:20.948 23:01:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:20.948 23:01:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.948 23:01:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.948 23:01:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:20.948 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:19:20.948 23:01:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:20.948 23:01:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.948 23:01:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.948 23:01:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:20.948 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:20.948 23:01:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:20.948 23:01:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:20.948 23:01:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.948 23:01:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.948 23:01:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:20.948 23:01:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.948 23:01:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.948 23:01:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:20.948 23:01:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.948 23:01:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.948 23:01:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:20.948 23:01:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:20.948 23:01:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.948 23:01:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.948 23:01:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.948 23:01:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.948 23:01:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:20.948 23:01:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:20.948 23:01:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.948 23:01:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.948 23:01:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:20.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:19:20.948 00:19:20.948 --- 10.0.0.2 ping statistics --- 00:19:20.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.948 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:19:20.948 23:01:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:20.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.486 ms 00:19:20.948 00:19:20.948 --- 10.0.0.1 ping statistics --- 00:19:20.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.948 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:19:20.948 23:01:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.948 23:01:48 -- nvmf/common.sh@410 -- # return 0 00:19:20.948 23:01:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:20.948 23:01:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.948 23:01:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:20.948 23:01:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.948 23:01:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:20.948 23:01:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:20.948 23:01:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:20.948 23:01:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:20.948 23:01:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:20.948 23:01:48 -- common/autotest_common.sh@10 -- # set +x 00:19:20.948 23:01:48 -- nvmf/common.sh@469 -- # nvmfpid=4112867 00:19:20.948 23:01:48 -- nvmf/common.sh@470 -- # waitforlisten 4112867 00:19:20.948 23:01:48 -- common/autotest_common.sh@819 -- # '[' -z 4112867 ']' 00:19:20.948 23:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.948 23:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:20.948 23:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.948 23:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:20.948 23:01:48 -- common/autotest_common.sh@10 -- # set +x 00:19:20.948 23:01:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:20.948 [2024-06-09 23:01:48.684398] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:20.948 [2024-06-09 23:01:48.684473] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:20.948 [2024-06-09 23:01:48.778044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.948 [2024-06-09 23:01:48.883116] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:20.948 [2024-06-09 23:01:48.883264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.948 [2024-06-09 23:01:48.883275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.948 [2024-06-09 23:01:48.883283] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.948 [2024-06-09 23:01:48.883474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:20.948 [2024-06-09 23:01:48.883645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:20.948 [2024-06-09 23:01:48.883805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.948 [2024-06-09 23:01:48.883804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.520 23:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:21.520 23:01:49 -- common/autotest_common.sh@852 -- # return 0 00:19:21.520 23:01:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:21.520 23:01:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 23:01:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.520 23:01:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.520 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 [2024-06-09 23:01:49.511949] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.520 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.520 23:01:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.520 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 Malloc0 00:19:21.520 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.520 23:01:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.520 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.520 23:01:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.520 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.520 23:01:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.520 23:01:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:21.520 23:01:49 -- common/autotest_common.sh@10 -- # set +x 00:19:21.520 [2024-06-09 23:01:49.553886] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.520 23:01:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:21.520 23:01:49 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:21.520 23:01:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:21.520 23:01:49 -- nvmf/common.sh@520 -- # config=() 00:19:21.520 23:01:49 -- nvmf/common.sh@520 -- # local subsystem config 00:19:21.520 23:01:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:21.520 23:01:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:21.520 { 00:19:21.520 "params": { 00:19:21.520 "name": "Nvme$subsystem", 00:19:21.520 "trtype": "$TEST_TRANSPORT", 00:19:21.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.520 "adrfam": "ipv4", 00:19:21.520 
"trsvcid": "$NVMF_PORT", 00:19:21.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.520 "hdgst": ${hdgst:-false}, 00:19:21.520 "ddgst": ${ddgst:-false} 00:19:21.520 }, 00:19:21.520 "method": "bdev_nvme_attach_controller" 00:19:21.520 } 00:19:21.520 EOF 00:19:21.520 )") 00:19:21.520 23:01:49 -- nvmf/common.sh@542 -- # cat 00:19:21.520 23:01:49 -- nvmf/common.sh@544 -- # jq . 00:19:21.520 23:01:49 -- nvmf/common.sh@545 -- # IFS=, 00:19:21.520 23:01:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:21.520 "params": { 00:19:21.520 "name": "Nvme1", 00:19:21.520 "trtype": "tcp", 00:19:21.520 "traddr": "10.0.0.2", 00:19:21.520 "adrfam": "ipv4", 00:19:21.520 "trsvcid": "4420", 00:19:21.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.520 "hdgst": false, 00:19:21.520 "ddgst": false 00:19:21.520 }, 00:19:21.520 "method": "bdev_nvme_attach_controller" 00:19:21.520 }' 00:19:21.520 [2024-06-09 23:01:49.581343] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:21.520 [2024-06-09 23:01:49.581393] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4113122 ] 00:19:21.520 [2024-06-09 23:01:49.634853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.781 [2024-06-09 23:01:49.727232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.781 [2024-06-09 23:01:49.727359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.781 [2024-06-09 23:01:49.727363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.781 [2024-06-09 23:01:49.906974] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:21.781 [2024-06-09 23:01:49.907001] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:21.781 I/O targets: 00:19:21.781 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:21.781 00:19:21.781 00:19:21.781 CUnit - A unit testing framework for C - Version 2.1-3 00:19:21.781 http://cunit.sourceforge.net/ 00:19:21.781 00:19:21.781 00:19:21.781 Suite: bdevio tests on: Nvme1n1 00:19:21.781 Test: blockdev write read block ...passed 00:19:22.042 Test: blockdev write zeroes read block ...passed 00:19:22.042 Test: blockdev write zeroes read no split ...passed 00:19:22.042 Test: blockdev write zeroes read split ...passed 00:19:22.042 Test: blockdev write zeroes read split partial ...passed 00:19:22.042 Test: blockdev reset ...[2024-06-09 23:01:50.149932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.042 [2024-06-09 23:01:50.149994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052b60 (9): Bad file descriptor 00:19:22.042 [2024-06-09 23:01:50.165389] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.042 passed 00:19:22.042 Test: blockdev write read 8 blocks ...passed 00:19:22.042 Test: blockdev write read size > 128k ...passed 00:19:22.042 Test: blockdev write read invalid size ...passed 00:19:22.042 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.042 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.042 Test: blockdev write read max offset ...passed 00:19:22.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.303 Test: blockdev writev readv 8 blocks ...passed 00:19:22.303 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.303 Test: blockdev writev readv block ...passed 00:19:22.303 Test: blockdev writev readv size > 128k ...passed 00:19:22.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.303 Test: blockdev comparev and writev ...[2024-06-09 23:01:50.441381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.441409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.441420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.441426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.442081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.442091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.442100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.442106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.442741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.442750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.442759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.442765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.443406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.443415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.303 [2024-06-09 23:01:50.443424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.303 [2024-06-09 23:01:50.443429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.564 passed 00:19:22.564 Test: blockdev nvme passthru rw ...passed 00:19:22.564 Test: blockdev nvme passthru vendor specific ...[2024-06-09 23:01:50.528453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.564 [2024-06-09 23:01:50.528464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.564 [2024-06-09 23:01:50.529021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.564 [2024-06-09 23:01:50.529029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.564 [2024-06-09 23:01:50.529551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.564 [2024-06-09 23:01:50.529559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.564 [2024-06-09 23:01:50.530102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.564 [2024-06-09 23:01:50.530110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.564 passed 00:19:22.564 Test: blockdev nvme admin passthru ...passed 00:19:22.564 Test: blockdev copy ...passed 00:19:22.564 00:19:22.565 Run Summary: Type Total Ran Passed Failed Inactive 00:19:22.565 suites 1 1 n/a 0 0 00:19:22.565 tests 23 23 23 0 0 00:19:22.565 asserts 152 152 152 0 n/a 00:19:22.565 00:19:22.565 Elapsed time = 1.354 seconds 00:19:22.826 23:01:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.826 23:01:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.826 23:01:50 -- common/autotest_common.sh@10 -- # set +x 00:19:22.826 23:01:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.826 23:01:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:22.826 23:01:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:22.826 23:01:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:22.826 23:01:50 -- nvmf/common.sh@116 -- # sync 00:19:22.826 23:01:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:22.826 23:01:50 -- nvmf/common.sh@119 -- # set +e 00:19:22.826 23:01:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:22.826 23:01:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:22.826 rmmod nvme_tcp 00:19:22.826 rmmod nvme_fabrics 00:19:22.826 rmmod nvme_keyring 00:19:22.826 23:01:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:22.826 23:01:50 -- nvmf/common.sh@123 -- # set -e 00:19:22.826 23:01:50 -- nvmf/common.sh@124 -- # return 0 00:19:22.826 23:01:50 -- nvmf/common.sh@477 -- # '[' -n 4112867 ']' 00:19:22.826 23:01:50 -- nvmf/common.sh@478 -- # killprocess 4112867 00:19:22.826 23:01:50 -- common/autotest_common.sh@926 -- # '[' -z 4112867 ']' 00:19:22.826 23:01:50 -- common/autotest_common.sh@930 -- # kill -0 4112867 00:19:22.826 23:01:50 -- common/autotest_common.sh@931 -- # uname 00:19:22.826 23:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:22.826 23:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4112867 00:19:22.826 23:01:50 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:22.826 23:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:22.826 23:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4112867' 00:19:22.826 killing process with pid 4112867 00:19:22.826 23:01:50 -- common/autotest_common.sh@945 -- # kill 4112867 00:19:22.826 23:01:50 -- common/autotest_common.sh@950 -- # wait 4112867 00:19:23.399 23:01:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:23.399 23:01:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:23.399 23:01:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:23.399 23:01:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.399 23:01:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:23.399 23:01:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.399 23:01:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.399 23:01:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.310 23:01:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:25.310 00:19:25.310 real 0m11.365s 00:19:25.310 user 0m13.367s 00:19:25.310 sys 0m5.796s 00:19:25.310 23:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.310 23:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:25.310 ************************************ 00:19:25.310 END TEST nvmf_bdevio_no_huge 00:19:25.310 ************************************ 00:19:25.310 23:01:53 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.310 23:01:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:25.310 23:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.310 23:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:25.310 ************************************ 00:19:25.310 START TEST nvmf_tls 00:19:25.310 ************************************ 00:19:25.310 23:01:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.572 * Looking for test storage... 
00:19:25.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.572 23:01:53 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.572 23:01:53 -- nvmf/common.sh@7 -- # uname -s 00:19:25.572 23:01:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.572 23:01:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.572 23:01:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.572 23:01:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.572 23:01:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.572 23:01:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.572 23:01:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.572 23:01:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.572 23:01:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.572 23:01:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.572 23:01:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.572 23:01:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.572 23:01:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.572 23:01:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.572 23:01:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.572 23:01:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.572 23:01:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.572 23:01:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.572 23:01:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.572 23:01:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.572 23:01:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.572 23:01:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.572 23:01:53 -- paths/export.sh@5 -- # export PATH 00:19:25.572 23:01:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.572 23:01:53 -- nvmf/common.sh@46 -- # : 0 00:19:25.572 23:01:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:25.572 23:01:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:25.572 23:01:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:25.572 23:01:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.572 23:01:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.572 23:01:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:25.572 23:01:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:25.572 23:01:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:25.572 23:01:53 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.572 23:01:53 -- target/tls.sh@71 -- # nvmftestinit 00:19:25.572 23:01:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:25.572 23:01:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.572 23:01:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:25.572 23:01:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:25.572 23:01:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:25.572 23:01:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.572 23:01:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.572 23:01:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.572 23:01:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:25.572 23:01:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:25.572 23:01:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:25.572 23:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:32.158 23:02:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.158 23:02:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:32.158 23:02:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:32.158 23:02:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:32.158 23:02:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:32.158 23:02:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:32.158 23:02:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:32.158 23:02:00 -- nvmf/common.sh@294 -- # net_devs=() 00:19:32.158 23:02:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:32.158 23:02:00 -- nvmf/common.sh@295 -- # e810=() 00:19:32.158 
23:02:00 -- nvmf/common.sh@295 -- # local -ga e810 00:19:32.158 23:02:00 -- nvmf/common.sh@296 -- # x722=() 00:19:32.158 23:02:00 -- nvmf/common.sh@296 -- # local -ga x722 00:19:32.158 23:02:00 -- nvmf/common.sh@297 -- # mlx=() 00:19:32.158 23:02:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:32.158 23:02:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.158 23:02:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:32.158 23:02:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:32.158 23:02:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:32.158 23:02:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:32.158 23:02:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:32.158 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.158 23:02:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:32.158 23:02:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.158 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.158 23:02:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:32.158 23:02:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:32.159 23:02:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:32.159 23:02:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:32.159 23:02:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:32.159 23:02:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.159 23:02:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:32.159 23:02:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.159 23:02:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.159 Found net devices under 
0000:4b:00.0: cvl_0_0 00:19:32.159 23:02:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.159 23:02:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:32.159 23:02:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.159 23:02:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:32.159 23:02:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.159 23:02:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.159 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.159 23:02:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.159 23:02:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:32.159 23:02:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:32.159 23:02:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:32.159 23:02:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:32.159 23:02:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:32.159 23:02:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:32.159 23:02:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:32.159 23:02:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.159 23:02:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:32.159 23:02:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:32.159 23:02:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:32.159 23:02:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:32.159 23:02:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:32.159 23:02:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:32.159 23:02:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:32.159 23:02:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:32.159 23:02:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:32.159 23:02:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:32.159 23:02:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:32.159 23:02:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:32.159 23:02:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:32.159 23:02:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:32.420 23:02:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:32.420 23:02:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:32.420 23:02:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:32.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:32.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:19:32.420 00:19:32.420 --- 10.0.0.2 ping statistics --- 00:19:32.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.420 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:19:32.420 23:02:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:32.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:32.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:19:32.420 00:19:32.420 --- 10.0.0.1 ping statistics --- 00:19:32.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:32.420 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:19:32.420 23:02:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:32.420 23:02:00 -- nvmf/common.sh@410 -- # return 0 00:19:32.420 23:02:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:32.420 23:02:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:32.420 23:02:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:32.420 23:02:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:32.420 23:02:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:32.420 23:02:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:32.420 23:02:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:32.420 23:02:00 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:32.420 23:02:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:32.420 23:02:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:32.420 23:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:32.420 23:02:00 -- nvmf/common.sh@469 -- # nvmfpid=4117569 00:19:32.420 23:02:00 -- nvmf/common.sh@470 -- # waitforlisten 4117569 00:19:32.421 23:02:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:32.421 23:02:00 -- common/autotest_common.sh@819 -- # '[' -z 4117569 ']' 00:19:32.421 23:02:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.421 23:02:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:32.421 23:02:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.421 23:02:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:32.421 23:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:32.421 [2024-06-09 23:02:00.495949] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:32.421 [2024-06-09 23:02:00.496014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.421 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.421 [2024-06-09 23:02:00.566375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.682 [2024-06-09 23:02:00.636970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:32.682 [2024-06-09 23:02:00.637087] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.682 [2024-06-09 23:02:00.637095] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.682 [2024-06-09 23:02:00.637108] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
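
For reference, the nvmf_tcp_init plumbing traced just above condenses to the sketch below: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with a firewall exception for the NVMe/TCP port and a ping in each direction as a sanity check. Interface names and addresses are the ones from this particular run; everything runs as root.

    # Condensed sketch of the nvmf_tcp_init steps traced above (this run's names).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add cvl_0_0_ns_spdk                 # target side lives in a namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
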
00:19:32.682 [2024-06-09 23:02:00.637130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.257 23:02:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:33.257 23:02:01 -- common/autotest_common.sh@852 -- # return 0 00:19:33.257 23:02:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:33.257 23:02:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:33.257 23:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:33.257 23:02:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.257 23:02:01 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:33.257 23:02:01 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:33.567 true 00:19:33.567 23:02:01 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.567 23:02:01 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:33.567 23:02:01 -- target/tls.sh@82 -- # version=0 00:19:33.567 23:02:01 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:33.567 23:02:01 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:33.827 23:02:01 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.827 23:02:01 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:33.827 23:02:01 -- target/tls.sh@90 -- # version=13 00:19:33.827 23:02:01 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:33.827 23:02:01 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:34.087 23:02:02 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.087 23:02:02 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:34.087 23:02:02 -- target/tls.sh@98 -- # version=7 00:19:34.087 23:02:02 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:34.087 23:02:02 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.087 23:02:02 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:34.348 23:02:02 -- target/tls.sh@105 -- # ktls=false 00:19:34.348 23:02:02 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:34.348 23:02:02 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:34.348 23:02:02 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.348 23:02:02 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:34.609 23:02:02 -- target/tls.sh@113 -- # ktls=true 00:19:34.609 23:02:02 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:34.609 23:02:02 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:34.870 23:02:02 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:34.870 23:02:02 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:34.870 23:02:02 -- target/tls.sh@121 -- # ktls=false 00:19:34.870 23:02:02 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:34.870 23:02:02 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:34.870 23:02:02 -- target/tls.sh@49 -- # local key hash crc 00:19:34.870 23:02:02 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:34.870 23:02:02 -- target/tls.sh@51 -- # hash=01 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # gzip -1 -c 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # tail -c8 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # head -c 4 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # crc='p$H�' 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.870 23:02:02 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.870 23:02:02 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:34.870 23:02:02 -- target/tls.sh@49 -- # local key hash crc 00:19:34.870 23:02:02 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:34.870 23:02:02 -- target/tls.sh@51 -- # hash=01 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # gzip -1 -c 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # tail -c8 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # head -c 4 00:19:34.870 23:02:02 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:34.870 23:02:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.870 23:02:02 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.870 23:02:02 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:34.870 23:02:02 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:34.870 23:02:02 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.870 23:02:02 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.870 23:02:02 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:34.870 23:02:02 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:34.870 23:02:02 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:35.131 23:02:03 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:35.391 23:02:03 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:35.391 23:02:03 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:35.391 23:02:03 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.391 [2024-06-09 23:02:03.484805] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
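
The format_interchange_psk trace above builds the NVMe TLS interchange string by appending a CRC-32 of the configured key to the key itself and base64-encoding the result; the CRC comes from the gzip trailer, which (per RFC 1952) ends with the CRC-32 of the input in little-endian order followed by the input size, so `tail -c8 | head -c 4` extracts it. A minimal standalone sketch of the same derivation, using the two key values from this run, is:

    # Sketch of the interchange-PSK derivation traced above (format_interchange_psk).
    # Assumes the RFC 1952 gzip trailer layout: last 8 bytes = CRC-32 (LE) + size.
    format_interchange_psk() {
        local key=$1 hash=01 crc
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
        # Note: a CRC byte of 0x00 or a trailing 0x0a would not survive bash
        # command substitution; the traced helper captures the CRC the same way.
        # Interchange form: NVMeTLSkey-1:<hash>:<base64(key || crc)>:
        echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff   # -> key1.txt above
    format_interchange_psk ffeeddccbbaa99887766554433221100   # -> key2.txt above
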
00:19:35.391 23:02:03 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.651 23:02:03 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.651 [2024-06-09 23:02:03.785558] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.651 [2024-06-09 23:02:03.785751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.651 23:02:03 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.912 malloc0 00:19:35.912 23:02:03 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.912 23:02:04 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:36.172 23:02:04 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:36.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.175 Initializing NVMe Controllers 00:19:46.175 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:46.176 Initialization complete. Launching workers. 
00:19:46.176 ======================================================== 00:19:46.176 Latency(us) 00:19:46.176 Device Information : IOPS MiB/s Average min max 00:19:46.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13883.19 54.23 4610.41 1086.88 5262.95 00:19:46.176 ======================================================== 00:19:46.176 Total : 13883.19 54.23 4610.41 1086.88 5262.95 00:19:46.176 00:19:46.176 23:02:14 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:46.176 23:02:14 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:46.176 23:02:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:46.176 23:02:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:46.176 23:02:14 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:46.176 23:02:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:46.176 23:02:14 -- target/tls.sh@28 -- # bdevperf_pid=4120349 00:19:46.176 23:02:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.176 23:02:14 -- target/tls.sh@31 -- # waitforlisten 4120349 /var/tmp/bdevperf.sock 00:19:46.176 23:02:14 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:46.176 23:02:14 -- common/autotest_common.sh@819 -- # '[' -z 4120349 ']' 00:19:46.176 23:02:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.176 23:02:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:46.176 23:02:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.176 23:02:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:46.176 23:02:14 -- common/autotest_common.sh@10 -- # set +x 00:19:46.436 [2024-06-09 23:02:14.355952] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:46.436 [2024-06-09 23:02:14.356007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4120349 ] 00:19:46.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.436 [2024-06-09 23:02:14.405640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.436 [2024-06-09 23:02:14.456471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.008 23:02:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:47.008 23:02:15 -- common/autotest_common.sh@852 -- # return 0 00:19:47.008 23:02:15 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:47.270 [2024-06-09 23:02:15.261118] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.270 TLSTESTn1 00:19:47.270 23:02:15 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:47.270 Running I/O for 10 seconds... 00:19:59.511 00:19:59.511 Latency(us) 00:19:59.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.511 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:59.511 Verification LBA range: start 0x0 length 0x2000 00:19:59.511 TLSTESTn1 : 10.06 1397.11 5.46 0.00 0.00 91414.17 10649.60 95245.65 00:19:59.511 =================================================================================================================== 00:19:59.511 Total : 1397.11 5.46 0.00 0.00 91414.17 10649.60 95245.65 00:19:59.511 0 00:19:59.511 23:02:25 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:59.511 23:02:25 -- target/tls.sh@45 -- # killprocess 4120349 00:19:59.511 23:02:25 -- common/autotest_common.sh@926 -- # '[' -z 4120349 ']' 00:19:59.511 23:02:25 -- common/autotest_common.sh@930 -- # kill -0 4120349 00:19:59.511 23:02:25 -- common/autotest_common.sh@931 -- # uname 00:19:59.511 23:02:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.511 23:02:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4120349 00:19:59.511 23:02:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:59.511 23:02:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:59.511 23:02:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4120349' 00:19:59.511 killing process with pid 4120349 00:19:59.511 23:02:25 -- common/autotest_common.sh@945 -- # kill 4120349 00:19:59.511 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.511 00:19:59.511 Latency(us) 00:19:59.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.511 =================================================================================================================== 00:19:59.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.511 23:02:25 -- common/autotest_common.sh@950 -- # wait 4120349 00:19:59.511 23:02:25 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:59.511 23:02:25 -- common/autotest_common.sh@640 -- # local es=0 00:19:59.511 23:02:25 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:59.511 23:02:25 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:59.511 23:02:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.511 23:02:25 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:59.511 23:02:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.512 23:02:25 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:59.512 23:02:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.512 23:02:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.512 23:02:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.512 23:02:25 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:19:59.512 23:02:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.512 23:02:25 -- target/tls.sh@28 -- # bdevperf_pid=4122533 00:19:59.512 23:02:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.512 23:02:25 -- target/tls.sh@31 -- # waitforlisten 4122533 /var/tmp/bdevperf.sock 00:19:59.512 23:02:25 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.512 23:02:25 -- common/autotest_common.sh@819 -- # '[' -z 4122533 ']' 00:19:59.512 23:02:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.512 23:02:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.512 23:02:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.512 23:02:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.512 23:02:25 -- common/autotest_common.sh@10 -- # set +x 00:19:59.512 [2024-06-09 23:02:25.773902] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:59.512 [2024-06-09 23:02:25.773958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122533 ] 00:19:59.512 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.512 [2024-06-09 23:02:25.823918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.512 [2024-06-09 23:02:25.874173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.512 23:02:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.512 23:02:26 -- common/autotest_common.sh@852 -- # return 0 00:19:59.512 23:02:26 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:59.512 [2024-06-09 23:02:26.675269] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.512 [2024-06-09 23:02:26.679938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.512 [2024-06-09 23:02:26.680570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd2e10 (107): Transport endpoint is not connected 00:19:59.512 [2024-06-09 23:02:26.681563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd2e10 (9): Bad file descriptor 00:19:59.512 [2024-06-09 23:02:26.682565] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.512 [2024-06-09 23:02:26.682572] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.512 [2024-06-09 23:02:26.682579] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:59.512 request: 00:19:59.512 { 00:19:59.512 "name": "TLSTEST", 00:19:59.512 "trtype": "tcp", 00:19:59.512 "traddr": "10.0.0.2", 00:19:59.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.512 "adrfam": "ipv4", 00:19:59.512 "trsvcid": "4420", 00:19:59.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.512 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:19:59.512 "method": "bdev_nvme_attach_controller", 00:19:59.512 "req_id": 1 00:19:59.512 } 00:19:59.512 Got JSON-RPC error response 00:19:59.512 response: 00:19:59.512 { 00:19:59.512 "code": -32602, 00:19:59.512 "message": "Invalid parameters" 00:19:59.512 } 00:19:59.512 23:02:26 -- target/tls.sh@36 -- # killprocess 4122533 00:19:59.512 23:02:26 -- common/autotest_common.sh@926 -- # '[' -z 4122533 ']' 00:19:59.512 23:02:26 -- common/autotest_common.sh@930 -- # kill -0 4122533 00:19:59.512 23:02:26 -- common/autotest_common.sh@931 -- # uname 00:19:59.512 23:02:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.512 23:02:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4122533 00:19:59.512 23:02:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:59.512 23:02:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:59.512 23:02:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4122533' 00:19:59.512 killing process with pid 4122533 00:19:59.512 23:02:26 -- common/autotest_common.sh@945 -- # kill 4122533 00:19:59.512 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.512 00:19:59.512 Latency(us) 00:19:59.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.512 =================================================================================================================== 00:19:59.512 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.512 23:02:26 -- common/autotest_common.sh@950 -- # wait 4122533 00:19:59.512 23:02:26 -- target/tls.sh@37 -- # return 1 00:19:59.512 23:02:26 -- common/autotest_common.sh@643 -- # es=1 00:19:59.512 23:02:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:59.512 23:02:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:59.512 23:02:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:59.512 23:02:26 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:59.512 23:02:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:59.512 23:02:26 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:59.512 23:02:26 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:59.512 23:02:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.512 23:02:26 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:59.512 23:02:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.512 23:02:26 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:59.512 23:02:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.512 23:02:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.512 23:02:26 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:19:59.512 23:02:26 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:59.512 23:02:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.512 23:02:26 -- target/tls.sh@28 -- # bdevperf_pid=4122737 00:19:59.512 23:02:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.512 23:02:26 -- target/tls.sh@31 -- # waitforlisten 4122737 /var/tmp/bdevperf.sock 00:19:59.512 23:02:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.512 23:02:26 -- common/autotest_common.sh@819 -- # '[' -z 4122737 ']' 00:19:59.512 23:02:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.512 23:02:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.512 23:02:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.512 23:02:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.512 23:02:26 -- common/autotest_common.sh@10 -- # set +x 00:19:59.512 [2024-06-09 23:02:26.915677] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:59.512 [2024-06-09 23:02:26.915731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122737 ] 00:19:59.512 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.512 [2024-06-09 23:02:26.965392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.512 [2024-06-09 23:02:27.015316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.512 23:02:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.512 23:02:27 -- common/autotest_common.sh@852 -- # return 0 00:19:59.512 23:02:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:59.774 [2024-06-09 23:02:27.820091] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.774 [2024-06-09 23:02:27.831417] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:59.774 [2024-06-09 23:02:27.831440] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:59.774 [2024-06-09 23:02:27.831463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.774 [2024-06-09 23:02:27.832309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1628e10 (107): Transport endpoint is not connected 00:19:59.774 [2024-06-09 23:02:27.833304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1628e10 (9): Bad file descriptor 00:19:59.774 [2024-06-09 23:02:27.834306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.774 [2024-06-09 23:02:27.834314] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.774 [2024-06-09 23:02:27.834320] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:59.774 request: 00:19:59.774 { 00:19:59.774 "name": "TLSTEST", 00:19:59.774 "trtype": "tcp", 00:19:59.774 "traddr": "10.0.0.2", 00:19:59.774 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:59.774 "adrfam": "ipv4", 00:19:59.774 "trsvcid": "4420", 00:19:59.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.774 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:59.774 "method": "bdev_nvme_attach_controller", 00:19:59.774 "req_id": 1 00:19:59.774 } 00:19:59.774 Got JSON-RPC error response 00:19:59.774 response: 00:19:59.774 { 00:19:59.774 "code": -32602, 00:19:59.774 "message": "Invalid parameters" 00:19:59.774 } 00:19:59.774 23:02:27 -- target/tls.sh@36 -- # killprocess 4122737 00:19:59.774 23:02:27 -- common/autotest_common.sh@926 -- # '[' -z 4122737 ']' 00:19:59.774 23:02:27 -- common/autotest_common.sh@930 -- # kill -0 4122737 00:19:59.774 23:02:27 -- common/autotest_common.sh@931 -- # uname 00:19:59.774 23:02:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.774 23:02:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4122737 00:19:59.774 23:02:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:59.774 23:02:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:59.774 23:02:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4122737' 00:19:59.774 killing process with pid 4122737 00:19:59.774 23:02:27 -- common/autotest_common.sh@945 -- # kill 4122737 00:19:59.774 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.774 00:19:59.774 Latency(us) 00:19:59.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.774 =================================================================================================================== 00:19:59.774 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.774 23:02:27 -- common/autotest_common.sh@950 -- # wait 4122737 00:20:00.036 23:02:28 -- target/tls.sh@37 -- # return 1 00:20:00.036 23:02:28 -- common/autotest_common.sh@643 -- # es=1 00:20:00.036 23:02:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:00.036 23:02:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:00.036 23:02:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:00.036 23:02:28 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:00.036 23:02:28 -- common/autotest_common.sh@640 -- # local es=0 00:20:00.036 23:02:28 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:00.036 23:02:28 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:00.036 23:02:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.036 23:02:28 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:00.036 23:02:28 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.036 23:02:28 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:00.036 23:02:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.036 23:02:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:00.036 23:02:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.036 23:02:28 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:00.036 23:02:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.036 23:02:28 -- target/tls.sh@28 -- # bdevperf_pid=4123080 00:20:00.036 23:02:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.036 23:02:28 -- target/tls.sh@31 -- # waitforlisten 4123080 /var/tmp/bdevperf.sock 00:20:00.036 23:02:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.036 23:02:28 -- common/autotest_common.sh@819 -- # '[' -z 4123080 ']' 00:20:00.036 23:02:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.036 23:02:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.036 23:02:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.036 23:02:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.036 23:02:28 -- common/autotest_common.sh@10 -- # set +x 00:20:00.036 [2024-06-09 23:02:28.069342] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:00.036 [2024-06-09 23:02:28.069396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123080 ] 00:20:00.036 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.036 [2024-06-09 23:02:28.119200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.036 [2024-06-09 23:02:28.169201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.981 23:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:00.981 23:02:28 -- common/autotest_common.sh@852 -- # return 0 00:20:00.982 23:02:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:00.982 [2024-06-09 23:02:28.953808] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.982 [2024-06-09 23:02:28.964210] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:00.982 [2024-06-09 23:02:28.964231] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:00.982 [2024-06-09 23:02:28.964254] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.982 [2024-06-09 23:02:28.965170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170ee10 (107): Transport endpoint is not connected 00:20:00.982 [2024-06-09 23:02:28.966165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170ee10 (9): Bad file descriptor 00:20:00.982 [2024-06-09 23:02:28.967167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:00.982 [2024-06-09 23:02:28.967175] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.982 [2024-06-09 23:02:28.967182] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
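The "Could not find PSK for identity" errors above come from the target looking up a pre-shared key under the TLS identity it derives from the connecting host NQN and the subsystem NQN; the host1/cnode2 attach is rejected here just as the host2/cnode1 attempt was earlier. A minimal sketch of that lookup string, assuming only the "NVMe0R01 <hostnqn> <subnqn>" layout visible in the log lines above (a hypothetical illustration, not part of tls.sh):

    # The target searches its PSK table for exactly this identity string;
    # the connection is only accepted if a key was registered under it.
    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"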
00:20:00.982 request: 00:20:00.982 { 00:20:00.982 "name": "TLSTEST", 00:20:00.982 "trtype": "tcp", 00:20:00.982 "traddr": "10.0.0.2", 00:20:00.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.982 "adrfam": "ipv4", 00:20:00.982 "trsvcid": "4420", 00:20:00.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.982 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:00.982 "method": "bdev_nvme_attach_controller", 00:20:00.982 "req_id": 1 00:20:00.982 } 00:20:00.982 Got JSON-RPC error response 00:20:00.982 response: 00:20:00.982 { 00:20:00.982 "code": -32602, 00:20:00.982 "message": "Invalid parameters" 00:20:00.982 } 00:20:00.982 23:02:28 -- target/tls.sh@36 -- # killprocess 4123080 00:20:00.982 23:02:28 -- common/autotest_common.sh@926 -- # '[' -z 4123080 ']' 00:20:00.982 23:02:28 -- common/autotest_common.sh@930 -- # kill -0 4123080 00:20:00.982 23:02:28 -- common/autotest_common.sh@931 -- # uname 00:20:00.982 23:02:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.982 23:02:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4123080 00:20:00.982 23:02:29 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:00.982 23:02:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:00.982 23:02:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4123080' 00:20:00.982 killing process with pid 4123080 00:20:00.982 23:02:29 -- common/autotest_common.sh@945 -- # kill 4123080 00:20:00.982 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.982 00:20:00.982 Latency(us) 00:20:00.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.982 =================================================================================================================== 00:20:00.982 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.982 23:02:29 -- common/autotest_common.sh@950 -- # wait 4123080 00:20:00.982 23:02:29 -- target/tls.sh@37 -- # return 1 00:20:00.982 23:02:29 -- common/autotest_common.sh@643 -- # es=1 00:20:00.982 23:02:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:00.982 23:02:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:00.982 23:02:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:00.982 23:02:29 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.982 23:02:29 -- common/autotest_common.sh@640 -- # local es=0 00:20:00.982 23:02:29 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.982 23:02:29 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:00.982 23:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.982 23:02:29 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:00.982 23:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.982 23:02:29 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.982 23:02:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.982 23:02:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.982 23:02:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.982 23:02:29 -- target/tls.sh@23 -- # psk= 00:20:00.982 23:02:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.982 23:02:29 -- target/tls.sh@27 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.982 23:02:29 -- target/tls.sh@28 -- # bdevperf_pid=4123234 00:20:01.244 23:02:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:01.244 23:02:29 -- target/tls.sh@31 -- # waitforlisten 4123234 /var/tmp/bdevperf.sock 00:20:01.244 23:02:29 -- common/autotest_common.sh@819 -- # '[' -z 4123234 ']' 00:20:01.244 23:02:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.244 23:02:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:01.244 23:02:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:01.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.244 23:02:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:01.244 23:02:29 -- common/autotest_common.sh@10 -- # set +x 00:20:01.244 [2024-06-09 23:02:29.184947] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:01.244 [2024-06-09 23:02:29.185002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123234 ] 00:20:01.244 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.244 [2024-06-09 23:02:29.233271] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.244 [2024-06-09 23:02:29.283655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.816 23:02:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.816 23:02:29 -- common/autotest_common.sh@852 -- # return 0 00:20:01.816 23:02:29 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:02.078 [2024-06-09 23:02:30.117118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:02.078 [2024-06-09 23:02:30.119228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef9890 (9): Bad file descriptor 00:20:02.078 [2024-06-09 23:02:30.120228] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.078 [2024-06-09 23:02:30.120236] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:02.078 [2024-06-09 23:02:30.120243] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
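The test at tls.sh@164 exercises the opposite case: bdev_nvme_attach_controller is issued with no --psk at all against a TLS-enabled listener, and the NOT wrapper turns the expected failure (including the JSON-RPC error that follows) into a test pass, which is why es=1 is treated as success. A rough sketch of that inversion pattern, assuming a plain bash stand-in rather than the exact autotest_common.sh helper, and with rpc.py abbreviated to its basename:

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    # Expected to fail under a TLS listener because no --psk is supplied.
    NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1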
00:20:02.078 request: 00:20:02.078 { 00:20:02.078 "name": "TLSTEST", 00:20:02.078 "trtype": "tcp", 00:20:02.078 "traddr": "10.0.0.2", 00:20:02.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.078 "adrfam": "ipv4", 00:20:02.078 "trsvcid": "4420", 00:20:02.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.078 "method": "bdev_nvme_attach_controller", 00:20:02.078 "req_id": 1 00:20:02.078 } 00:20:02.078 Got JSON-RPC error response 00:20:02.078 response: 00:20:02.078 { 00:20:02.078 "code": -32602, 00:20:02.078 "message": "Invalid parameters" 00:20:02.078 } 00:20:02.078 23:02:30 -- target/tls.sh@36 -- # killprocess 4123234 00:20:02.078 23:02:30 -- common/autotest_common.sh@926 -- # '[' -z 4123234 ']' 00:20:02.078 23:02:30 -- common/autotest_common.sh@930 -- # kill -0 4123234 00:20:02.078 23:02:30 -- common/autotest_common.sh@931 -- # uname 00:20:02.078 23:02:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:02.078 23:02:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4123234 00:20:02.078 23:02:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:02.078 23:02:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:02.078 23:02:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4123234' 00:20:02.078 killing process with pid 4123234 00:20:02.078 23:02:30 -- common/autotest_common.sh@945 -- # kill 4123234 00:20:02.078 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.078 00:20:02.078 Latency(us) 00:20:02.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.078 =================================================================================================================== 00:20:02.078 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.078 23:02:30 -- common/autotest_common.sh@950 -- # wait 4123234 00:20:02.340 23:02:30 -- target/tls.sh@37 -- # return 1 00:20:02.340 23:02:30 -- common/autotest_common.sh@643 -- # es=1 00:20:02.340 23:02:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:02.340 23:02:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:02.340 23:02:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:02.340 23:02:30 -- target/tls.sh@167 -- # killprocess 4117569 00:20:02.340 23:02:30 -- common/autotest_common.sh@926 -- # '[' -z 4117569 ']' 00:20:02.340 23:02:30 -- common/autotest_common.sh@930 -- # kill -0 4117569 00:20:02.340 23:02:30 -- common/autotest_common.sh@931 -- # uname 00:20:02.340 23:02:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:02.340 23:02:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4117569 00:20:02.340 23:02:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:02.340 23:02:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:02.340 23:02:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4117569' 00:20:02.340 killing process with pid 4117569 00:20:02.340 23:02:30 -- common/autotest_common.sh@945 -- # kill 4117569 00:20:02.340 23:02:30 -- common/autotest_common.sh@950 -- # wait 4117569 00:20:02.340 23:02:30 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:02.340 23:02:30 -- target/tls.sh@49 -- # local key hash crc 00:20:02.340 23:02:30 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:02.340 23:02:30 -- target/tls.sh@51 -- # hash=02 00:20:02.340 23:02:30 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:02.340 23:02:30 -- target/tls.sh@52 -- # head -c 4 00:20:02.340 23:02:30 -- target/tls.sh@52 -- # gzip -1 -c 00:20:02.340 23:02:30 -- target/tls.sh@52 -- # tail -c8 00:20:02.340 23:02:30 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:02.340 23:02:30 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:02.340 23:02:30 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:02.340 23:02:30 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.340 23:02:30 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.340 23:02:30 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:02.340 23:02:30 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.340 23:02:30 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:02.602 23:02:30 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:02.602 23:02:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:02.602 23:02:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:02.602 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:02.602 23:02:30 -- nvmf/common.sh@469 -- # nvmfpid=4123464 00:20:02.602 23:02:30 -- nvmf/common.sh@470 -- # waitforlisten 4123464 00:20:02.602 23:02:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:02.602 23:02:30 -- common/autotest_common.sh@819 -- # '[' -z 4123464 ']' 00:20:02.602 23:02:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.602 23:02:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:02.602 23:02:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.602 23:02:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:02.602 23:02:30 -- common/autotest_common.sh@10 -- # set +x 00:20:02.602 [2024-06-09 23:02:30.578748] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:02.602 [2024-06-09 23:02:30.578804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.602 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.602 [2024-06-09 23:02:30.643731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.602 [2024-06-09 23:02:30.708321] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:02.602 [2024-06-09 23:02:30.708446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.602 [2024-06-09 23:02:30.708454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.602 [2024-06-09 23:02:30.708462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
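The format_interchange_psk block above builds the NVMe TLS interchange key that is then written to key_long.txt. A minimal shell sketch of that derivation, reproducing the same pipeline the trace shows and assuming GNU gzip/coreutils; the 4-byte CRC shows up as mojibake (�e�') in the log because it is raw binary:

    key=00112233445566778899aabbccddeeff0011223344556677   # configured secret (hex string)
    hash=02                                                 # hash identifier used by this test
    # gzip's 8-byte trailer is CRC32 then ISIZE; keep the first 4 bytes, i.e. the CRC32 of the key string.
    # (Fine for this key; a CRC containing NUL or trailing-newline bytes would not survive the shell variable.)
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Interchange format: NVMeTLSkey-1:<hash>:base64(key || crc32):
    key_long="NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    echo "$key_long"   # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: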
00:20:02.602 [2024-06-09 23:02:30.708479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.212 23:02:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:03.212 23:02:31 -- common/autotest_common.sh@852 -- # return 0 00:20:03.212 23:02:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:03.212 23:02:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:03.212 23:02:31 -- common/autotest_common.sh@10 -- # set +x 00:20:03.212 23:02:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.212 23:02:31 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:03.212 23:02:31 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:03.212 23:02:31 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.473 [2024-06-09 23:02:31.503350] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.473 23:02:31 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.734 23:02:31 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.734 [2024-06-09 23:02:31.800096] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.734 [2024-06-09 23:02:31.800298] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.734 23:02:31 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.995 malloc0 00:20:03.995 23:02:31 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.995 23:02:32 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:04.256 23:02:32 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:04.256 23:02:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.256 23:02:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.256 23:02:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.256 23:02:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:04.256 23:02:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.256 23:02:32 -- target/tls.sh@28 -- # bdevperf_pid=4123839 00:20:04.256 23:02:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.256 23:02:32 -- target/tls.sh@31 -- # waitforlisten 4123839 /var/tmp/bdevperf.sock 00:20:04.256 23:02:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.256 23:02:32 -- common/autotest_common.sh@819 -- # '[' -z 4123839 
']' 00:20:04.256 23:02:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.256 23:02:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.256 23:02:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.256 23:02:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.256 23:02:32 -- common/autotest_common.sh@10 -- # set +x 00:20:04.256 [2024-06-09 23:02:32.315219] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:04.256 [2024-06-09 23:02:32.315286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123839 ] 00:20:04.256 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.256 [2024-06-09 23:02:32.366006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.256 [2024-06-09 23:02:32.416784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.197 23:02:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.197 23:02:33 -- common/autotest_common.sh@852 -- # return 0 00:20:05.197 23:02:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:05.197 [2024-06-09 23:02:33.185728] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.197 TLSTESTn1 00:20:05.198 23:02:33 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.198 Running I/O for 10 seconds... 
00:20:17.432 00:20:17.432 Latency(us) 00:20:17.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.432 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.432 Verification LBA range: start 0x0 length 0x2000 00:20:17.432 TLSTESTn1 : 10.03 1658.51 6.48 0.00 0.00 77099.26 7755.09 88692.05 00:20:17.432 =================================================================================================================== 00:20:17.432 Total : 1658.51 6.48 0.00 0.00 77099.26 7755.09 88692.05 00:20:17.432 0 00:20:17.432 23:02:43 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:17.432 23:02:43 -- target/tls.sh@45 -- # killprocess 4123839 00:20:17.432 23:02:43 -- common/autotest_common.sh@926 -- # '[' -z 4123839 ']' 00:20:17.432 23:02:43 -- common/autotest_common.sh@930 -- # kill -0 4123839 00:20:17.432 23:02:43 -- common/autotest_common.sh@931 -- # uname 00:20:17.432 23:02:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.432 23:02:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4123839 00:20:17.432 23:02:43 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.432 23:02:43 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.432 23:02:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4123839' 00:20:17.432 killing process with pid 4123839 00:20:17.432 23:02:43 -- common/autotest_common.sh@945 -- # kill 4123839 00:20:17.432 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.432 00:20:17.432 Latency(us) 00:20:17.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.432 =================================================================================================================== 00:20:17.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.433 23:02:43 -- common/autotest_common.sh@950 -- # wait 4123839 00:20:17.433 23:02:43 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.433 23:02:43 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.433 23:02:43 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.433 23:02:43 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.433 23:02:43 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.433 23:02:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.433 23:02:43 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.433 23:02:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.433 23:02:43 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.433 23:02:43 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.433 23:02:43 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.433 23:02:43 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.433 23:02:43 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:17.433 23:02:43 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.433 23:02:43 -- target/tls.sh@28 -- # bdevperf_pid=4126190 00:20:17.433 23:02:43 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.433 23:02:43 -- target/tls.sh@31 -- # waitforlisten 4126190 /var/tmp/bdevperf.sock 00:20:17.433 23:02:43 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.433 23:02:43 -- common/autotest_common.sh@819 -- # '[' -z 4126190 ']' 00:20:17.433 23:02:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.433 23:02:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.433 23:02:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.433 23:02:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.433 23:02:43 -- common/autotest_common.sh@10 -- # set +x 00:20:17.433 [2024-06-09 23:02:43.672369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:17.433 [2024-06-09 23:02:43.672430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126190 ] 00:20:17.433 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.433 [2024-06-09 23:02:43.722325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.433 [2024-06-09 23:02:43.772499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.433 23:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.433 23:02:44 -- common/autotest_common.sh@852 -- # return 0 00:20:17.433 23:02:44 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.433 [2024-06-09 23:02:44.569108] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.433 [2024-06-09 23:02:44.569137] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:17.433 request: 00:20:17.433 { 00:20:17.433 "name": "TLSTEST", 00:20:17.433 "trtype": "tcp", 00:20:17.433 "traddr": "10.0.0.2", 00:20:17.433 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.433 "adrfam": "ipv4", 00:20:17.433 "trsvcid": "4420", 00:20:17.433 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.433 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:17.433 "method": "bdev_nvme_attach_controller", 00:20:17.433 "req_id": 1 00:20:17.433 } 00:20:17.433 Got JSON-RPC error response 00:20:17.433 response: 00:20:17.433 { 00:20:17.433 "code": -22, 00:20:17.433 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:17.433 } 00:20:17.433 23:02:44 -- target/tls.sh@36 -- # killprocess 4126190 00:20:17.433 23:02:44 -- common/autotest_common.sh@926 -- # '[' -z 4126190 ']' 00:20:17.433 23:02:44 -- 
common/autotest_common.sh@930 -- # kill -0 4126190 00:20:17.433 23:02:44 -- common/autotest_common.sh@931 -- # uname 00:20:17.433 23:02:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4126190 00:20:17.433 23:02:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.433 23:02:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4126190' 00:20:17.433 killing process with pid 4126190 00:20:17.433 23:02:44 -- common/autotest_common.sh@945 -- # kill 4126190 00:20:17.433 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.433 00:20:17.433 Latency(us) 00:20:17.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.433 =================================================================================================================== 00:20:17.433 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.433 23:02:44 -- common/autotest_common.sh@950 -- # wait 4126190 00:20:17.433 23:02:44 -- target/tls.sh@37 -- # return 1 00:20:17.433 23:02:44 -- common/autotest_common.sh@643 -- # es=1 00:20:17.433 23:02:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:17.433 23:02:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:17.433 23:02:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:17.433 23:02:44 -- target/tls.sh@183 -- # killprocess 4123464 00:20:17.433 23:02:44 -- common/autotest_common.sh@926 -- # '[' -z 4123464 ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@930 -- # kill -0 4123464 00:20:17.433 23:02:44 -- common/autotest_common.sh@931 -- # uname 00:20:17.433 23:02:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4123464 00:20:17.433 23:02:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:17.433 23:02:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4123464' 00:20:17.433 killing process with pid 4123464 00:20:17.433 23:02:44 -- common/autotest_common.sh@945 -- # kill 4123464 00:20:17.433 23:02:44 -- common/autotest_common.sh@950 -- # wait 4123464 00:20:17.433 23:02:44 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:17.433 23:02:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:17.433 23:02:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:17.433 23:02:44 -- common/autotest_common.sh@10 -- # set +x 00:20:17.433 23:02:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.433 23:02:44 -- nvmf/common.sh@469 -- # nvmfpid=4126486 00:20:17.433 23:02:44 -- nvmf/common.sh@470 -- # waitforlisten 4126486 00:20:17.433 23:02:44 -- common/autotest_common.sh@819 -- # '[' -z 4126486 ']' 00:20:17.433 23:02:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.433 23:02:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.433 23:02:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
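The -22 "Could not retrieve PSK from file" failure above is an initiator-side permission check: after tls.sh@179 relaxes key_long.txt to 0666, bdev_nvme_attach_controller (tcp_load_psk) refuses to read it. A minimal sketch of restoring the restrictive mode before the next attach, assuming standard coreutils; the exact policy is inferred from the "Incorrect permissions for PSK file" message rather than stated anywhere in this log:

    key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
    chmod 0600 "$key"        # owner read/write only, as tls.sh@190 does further down
    stat -c '%a %n' "$key"   # expect: 600 .../key_long.txt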
00:20:17.433 23:02:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.433 23:02:44 -- common/autotest_common.sh@10 -- # set +x 00:20:17.433 [2024-06-09 23:02:44.972011] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:17.433 [2024-06-09 23:02:44.972064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.433 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.433 [2024-06-09 23:02:45.033238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.433 [2024-06-09 23:02:45.094342] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.433 [2024-06-09 23:02:45.094466] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.433 [2024-06-09 23:02:45.094476] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.433 [2024-06-09 23:02:45.094483] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.433 [2024-06-09 23:02:45.094501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.695 23:02:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.695 23:02:45 -- common/autotest_common.sh@852 -- # return 0 00:20:17.695 23:02:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:17.695 23:02:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:17.695 23:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:17.695 23:02:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.695 23:02:45 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.695 23:02:45 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.695 23:02:45 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.695 23:02:45 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:17.695 23:02:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.695 23:02:45 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:17.695 23:02:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.695 23:02:45 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.695 23:02:45 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.695 23:02:45 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.956 [2024-06-09 23:02:45.921179] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.956 23:02:45 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.956 23:02:46 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.217 [2024-06-09 23:02:46.193865] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.217 [2024-06-09 23:02:46.194061] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.217 23:02:46 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.217 malloc0 00:20:18.217 23:02:46 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.478 23:02:46 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.478 [2024-06-09 23:02:46.605614] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:18.478 [2024-06-09 23:02:46.605636] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:18.478 [2024-06-09 23:02:46.605654] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:18.478 request: 00:20:18.478 { 00:20:18.478 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.479 "host": "nqn.2016-06.io.spdk:host1", 00:20:18.479 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:18.479 "method": "nvmf_subsystem_add_host", 00:20:18.479 "req_id": 1 00:20:18.479 } 00:20:18.479 Got JSON-RPC error response 00:20:18.479 response: 00:20:18.479 { 00:20:18.479 "code": -32603, 00:20:18.479 "message": "Internal error" 00:20:18.479 } 00:20:18.479 23:02:46 -- common/autotest_common.sh@643 -- # es=1 00:20:18.479 23:02:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:18.479 23:02:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:18.479 23:02:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:18.479 23:02:46 -- target/tls.sh@189 -- # killprocess 4126486 00:20:18.479 23:02:46 -- common/autotest_common.sh@926 -- # '[' -z 4126486 ']' 00:20:18.479 23:02:46 -- common/autotest_common.sh@930 -- # kill -0 4126486 00:20:18.479 23:02:46 -- common/autotest_common.sh@931 -- # uname 00:20:18.479 23:02:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.479 23:02:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4126486 00:20:18.739 23:02:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:18.739 23:02:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:18.739 23:02:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4126486' 00:20:18.739 killing process with pid 4126486 00:20:18.739 23:02:46 -- common/autotest_common.sh@945 -- # kill 4126486 00:20:18.739 23:02:46 -- common/autotest_common.sh@950 -- # wait 4126486 00:20:18.739 23:02:46 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.739 23:02:46 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:18.740 23:02:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:18.740 23:02:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:18.740 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:18.740 23:02:46 -- nvmf/common.sh@469 -- # nvmfpid=4126915 00:20:18.740 23:02:46 -- nvmf/common.sh@470 -- # waitforlisten 4126915 00:20:18.740 23:02:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:18.740 23:02:46 -- common/autotest_common.sh@819 -- # '[' -z 4126915 ']' 00:20:18.740 23:02:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.740 23:02:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.740 23:02:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.740 23:02:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.740 23:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:18.740 [2024-06-09 23:02:46.870482] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:18.740 [2024-06-09 23:02:46.870536] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.740 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.001 [2024-06-09 23:02:46.936158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.001 [2024-06-09 23:02:46.997704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:19.001 [2024-06-09 23:02:46.997826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.001 [2024-06-09 23:02:46.997834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.001 [2024-06-09 23:02:46.997843] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.001 [2024-06-09 23:02:46.997863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.573 23:02:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.573 23:02:47 -- common/autotest_common.sh@852 -- # return 0 00:20:19.573 23:02:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:19.573 23:02:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:19.573 23:02:47 -- common/autotest_common.sh@10 -- # set +x 00:20:19.573 23:02:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.573 23:02:47 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:19.573 23:02:47 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:19.573 23:02:47 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:19.834 [2024-06-09 23:02:47.800483] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.834 23:02:47 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:19.834 23:02:47 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:20.095 [2024-06-09 23:02:48.085195] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.095 [2024-06-09 23:02:48.085411] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.095 23:02:48 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:20.095 malloc0 00:20:20.095 23:02:48 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:20.357 23:02:48 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.619 23:02:48 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.619 23:02:48 -- target/tls.sh@197 -- # bdevperf_pid=4127277 00:20:20.619 23:02:48 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.619 23:02:48 -- target/tls.sh@200 -- # waitforlisten 4127277 /var/tmp/bdevperf.sock 00:20:20.619 23:02:48 -- common/autotest_common.sh@819 -- # '[' -z 4127277 ']' 00:20:20.619 23:02:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.619 23:02:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.619 23:02:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
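The nvmf target brought up just above (pid 4126915) is configured for TLS with the same RPC sequence tls.sh has used throughout this run: a TCP transport, a subsystem, a TLS-enabled listener, a malloc namespace, and a host entry bound to the PSK file. Consolidated as a sketch, with rpc.py abbreviated to its basename and every argument taken verbatim from the trace above:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k marks the listener as TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt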
00:20:20.619 23:02:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.619 23:02:48 -- common/autotest_common.sh@10 -- # set +x 00:20:20.619 [2024-06-09 23:02:48.561634] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:20.619 [2024-06-09 23:02:48.561689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127277 ] 00:20:20.619 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.619 [2024-06-09 23:02:48.611312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.619 [2024-06-09 23:02:48.661872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.193 23:02:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.193 23:02:49 -- common/autotest_common.sh@852 -- # return 0 00:20:21.193 23:02:49 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.454 [2024-06-09 23:02:49.474995] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.454 TLSTESTn1 00:20:21.454 23:02:49 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:21.716 23:02:49 -- target/tls.sh@205 -- # tgtconf='{ 00:20:21.717 "subsystems": [ 00:20:21.717 { 00:20:21.717 "subsystem": "iobuf", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "iobuf_set_options", 00:20:21.717 "params": { 00:20:21.717 "small_pool_count": 8192, 00:20:21.717 "large_pool_count": 1024, 00:20:21.717 "small_bufsize": 8192, 00:20:21.717 "large_bufsize": 135168 00:20:21.717 } 00:20:21.717 } 00:20:21.717 ] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "sock", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "sock_impl_set_options", 00:20:21.717 "params": { 00:20:21.717 "impl_name": "posix", 00:20:21.717 "recv_buf_size": 2097152, 00:20:21.717 "send_buf_size": 2097152, 00:20:21.717 "enable_recv_pipe": true, 00:20:21.717 "enable_quickack": false, 00:20:21.717 "enable_placement_id": 0, 00:20:21.717 "enable_zerocopy_send_server": true, 00:20:21.717 "enable_zerocopy_send_client": false, 00:20:21.717 "zerocopy_threshold": 0, 00:20:21.717 "tls_version": 0, 00:20:21.717 "enable_ktls": false 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "sock_impl_set_options", 00:20:21.717 "params": { 00:20:21.717 "impl_name": "ssl", 00:20:21.717 "recv_buf_size": 4096, 00:20:21.717 "send_buf_size": 4096, 00:20:21.717 "enable_recv_pipe": true, 00:20:21.717 "enable_quickack": false, 00:20:21.717 "enable_placement_id": 0, 00:20:21.717 "enable_zerocopy_send_server": true, 00:20:21.717 "enable_zerocopy_send_client": false, 00:20:21.717 "zerocopy_threshold": 0, 00:20:21.717 "tls_version": 0, 00:20:21.717 "enable_ktls": false 00:20:21.717 } 00:20:21.717 } 00:20:21.717 ] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "vmd", 00:20:21.717 "config": [] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "accel", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "accel_set_options", 00:20:21.717 "params": { 00:20:21.717 "small_cache_size": 128, 
00:20:21.717 "large_cache_size": 16, 00:20:21.717 "task_count": 2048, 00:20:21.717 "sequence_count": 2048, 00:20:21.717 "buf_count": 2048 00:20:21.717 } 00:20:21.717 } 00:20:21.717 ] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "bdev", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "bdev_set_options", 00:20:21.717 "params": { 00:20:21.717 "bdev_io_pool_size": 65535, 00:20:21.717 "bdev_io_cache_size": 256, 00:20:21.717 "bdev_auto_examine": true, 00:20:21.717 "iobuf_small_cache_size": 128, 00:20:21.717 "iobuf_large_cache_size": 16 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_raid_set_options", 00:20:21.717 "params": { 00:20:21.717 "process_window_size_kb": 1024 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_iscsi_set_options", 00:20:21.717 "params": { 00:20:21.717 "timeout_sec": 30 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_nvme_set_options", 00:20:21.717 "params": { 00:20:21.717 "action_on_timeout": "none", 00:20:21.717 "timeout_us": 0, 00:20:21.717 "timeout_admin_us": 0, 00:20:21.717 "keep_alive_timeout_ms": 10000, 00:20:21.717 "transport_retry_count": 4, 00:20:21.717 "arbitration_burst": 0, 00:20:21.717 "low_priority_weight": 0, 00:20:21.717 "medium_priority_weight": 0, 00:20:21.717 "high_priority_weight": 0, 00:20:21.717 "nvme_adminq_poll_period_us": 10000, 00:20:21.717 "nvme_ioq_poll_period_us": 0, 00:20:21.717 "io_queue_requests": 0, 00:20:21.717 "delay_cmd_submit": true, 00:20:21.717 "bdev_retry_count": 3, 00:20:21.717 "transport_ack_timeout": 0, 00:20:21.717 "ctrlr_loss_timeout_sec": 0, 00:20:21.717 "reconnect_delay_sec": 0, 00:20:21.717 "fast_io_fail_timeout_sec": 0, 00:20:21.717 "generate_uuids": false, 00:20:21.717 "transport_tos": 0, 00:20:21.717 "io_path_stat": false, 00:20:21.717 "allow_accel_sequence": false 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_nvme_set_hotplug", 00:20:21.717 "params": { 00:20:21.717 "period_us": 100000, 00:20:21.717 "enable": false 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_malloc_create", 00:20:21.717 "params": { 00:20:21.717 "name": "malloc0", 00:20:21.717 "num_blocks": 8192, 00:20:21.717 "block_size": 4096, 00:20:21.717 "physical_block_size": 4096, 00:20:21.717 "uuid": "18af2088-b028-4204-9c35-ad7e12fb5686", 00:20:21.717 "optimal_io_boundary": 0 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "bdev_wait_for_examine" 00:20:21.717 } 00:20:21.717 ] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "nbd", 00:20:21.717 "config": [] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "scheduler", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "framework_set_scheduler", 00:20:21.717 "params": { 00:20:21.717 "name": "static" 00:20:21.717 } 00:20:21.717 } 00:20:21.717 ] 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "subsystem": "nvmf", 00:20:21.717 "config": [ 00:20:21.717 { 00:20:21.717 "method": "nvmf_set_config", 00:20:21.717 "params": { 00:20:21.717 "discovery_filter": "match_any", 00:20:21.717 "admin_cmd_passthru": { 00:20:21.717 "identify_ctrlr": false 00:20:21.717 } 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_set_max_subsystems", 00:20:21.717 "params": { 00:20:21.717 "max_subsystems": 1024 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_set_crdt", 00:20:21.717 "params": { 00:20:21.717 "crdt1": 0, 00:20:21.717 "crdt2": 0, 00:20:21.717 "crdt3": 0 00:20:21.717 } 
00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_create_transport", 00:20:21.717 "params": { 00:20:21.717 "trtype": "TCP", 00:20:21.717 "max_queue_depth": 128, 00:20:21.717 "max_io_qpairs_per_ctrlr": 127, 00:20:21.717 "in_capsule_data_size": 4096, 00:20:21.717 "max_io_size": 131072, 00:20:21.717 "io_unit_size": 131072, 00:20:21.717 "max_aq_depth": 128, 00:20:21.717 "num_shared_buffers": 511, 00:20:21.717 "buf_cache_size": 4294967295, 00:20:21.717 "dif_insert_or_strip": false, 00:20:21.717 "zcopy": false, 00:20:21.717 "c2h_success": false, 00:20:21.717 "sock_priority": 0, 00:20:21.717 "abort_timeout_sec": 1 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_create_subsystem", 00:20:21.717 "params": { 00:20:21.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.717 "allow_any_host": false, 00:20:21.717 "serial_number": "SPDK00000000000001", 00:20:21.717 "model_number": "SPDK bdev Controller", 00:20:21.717 "max_namespaces": 10, 00:20:21.717 "min_cntlid": 1, 00:20:21.717 "max_cntlid": 65519, 00:20:21.717 "ana_reporting": false 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_subsystem_add_host", 00:20:21.717 "params": { 00:20:21.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.717 "host": "nqn.2016-06.io.spdk:host1", 00:20:21.717 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_subsystem_add_ns", 00:20:21.717 "params": { 00:20:21.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.717 "namespace": { 00:20:21.717 "nsid": 1, 00:20:21.717 "bdev_name": "malloc0", 00:20:21.717 "nguid": "18AF2088B02842049C35AD7E12FB5686", 00:20:21.717 "uuid": "18af2088-b028-4204-9c35-ad7e12fb5686" 00:20:21.717 } 00:20:21.717 } 00:20:21.717 }, 00:20:21.717 { 00:20:21.717 "method": "nvmf_subsystem_add_listener", 00:20:21.718 "params": { 00:20:21.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.718 "listen_address": { 00:20:21.718 "trtype": "TCP", 00:20:21.718 "adrfam": "IPv4", 00:20:21.718 "traddr": "10.0.0.2", 00:20:21.718 "trsvcid": "4420" 00:20:21.718 }, 00:20:21.718 "secure_channel": true 00:20:21.718 } 00:20:21.718 } 00:20:21.718 ] 00:20:21.718 } 00:20:21.718 ] 00:20:21.718 }' 00:20:21.718 23:02:49 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:21.979 23:02:50 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:21.979 "subsystems": [ 00:20:21.979 { 00:20:21.979 "subsystem": "iobuf", 00:20:21.979 "config": [ 00:20:21.979 { 00:20:21.979 "method": "iobuf_set_options", 00:20:21.979 "params": { 00:20:21.979 "small_pool_count": 8192, 00:20:21.979 "large_pool_count": 1024, 00:20:21.979 "small_bufsize": 8192, 00:20:21.979 "large_bufsize": 135168 00:20:21.979 } 00:20:21.979 } 00:20:21.979 ] 00:20:21.979 }, 00:20:21.979 { 00:20:21.979 "subsystem": "sock", 00:20:21.979 "config": [ 00:20:21.979 { 00:20:21.979 "method": "sock_impl_set_options", 00:20:21.979 "params": { 00:20:21.979 "impl_name": "posix", 00:20:21.979 "recv_buf_size": 2097152, 00:20:21.979 "send_buf_size": 2097152, 00:20:21.979 "enable_recv_pipe": true, 00:20:21.979 "enable_quickack": false, 00:20:21.979 "enable_placement_id": 0, 00:20:21.979 "enable_zerocopy_send_server": true, 00:20:21.979 "enable_zerocopy_send_client": false, 00:20:21.979 "zerocopy_threshold": 0, 00:20:21.979 "tls_version": 0, 00:20:21.979 "enable_ktls": false 00:20:21.979 } 00:20:21.979 }, 00:20:21.979 { 00:20:21.979 "method": 
"sock_impl_set_options", 00:20:21.979 "params": { 00:20:21.979 "impl_name": "ssl", 00:20:21.979 "recv_buf_size": 4096, 00:20:21.979 "send_buf_size": 4096, 00:20:21.979 "enable_recv_pipe": true, 00:20:21.979 "enable_quickack": false, 00:20:21.979 "enable_placement_id": 0, 00:20:21.979 "enable_zerocopy_send_server": true, 00:20:21.979 "enable_zerocopy_send_client": false, 00:20:21.979 "zerocopy_threshold": 0, 00:20:21.979 "tls_version": 0, 00:20:21.979 "enable_ktls": false 00:20:21.979 } 00:20:21.979 } 00:20:21.979 ] 00:20:21.979 }, 00:20:21.979 { 00:20:21.979 "subsystem": "vmd", 00:20:21.979 "config": [] 00:20:21.979 }, 00:20:21.979 { 00:20:21.979 "subsystem": "accel", 00:20:21.979 "config": [ 00:20:21.979 { 00:20:21.979 "method": "accel_set_options", 00:20:21.979 "params": { 00:20:21.979 "small_cache_size": 128, 00:20:21.979 "large_cache_size": 16, 00:20:21.979 "task_count": 2048, 00:20:21.979 "sequence_count": 2048, 00:20:21.979 "buf_count": 2048 00:20:21.979 } 00:20:21.979 } 00:20:21.979 ] 00:20:21.979 }, 00:20:21.979 { 00:20:21.979 "subsystem": "bdev", 00:20:21.979 "config": [ 00:20:21.979 { 00:20:21.979 "method": "bdev_set_options", 00:20:21.979 "params": { 00:20:21.979 "bdev_io_pool_size": 65535, 00:20:21.980 "bdev_io_cache_size": 256, 00:20:21.980 "bdev_auto_examine": true, 00:20:21.980 "iobuf_small_cache_size": 128, 00:20:21.980 "iobuf_large_cache_size": 16 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_raid_set_options", 00:20:21.980 "params": { 00:20:21.980 "process_window_size_kb": 1024 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_iscsi_set_options", 00:20:21.980 "params": { 00:20:21.980 "timeout_sec": 30 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_nvme_set_options", 00:20:21.980 "params": { 00:20:21.980 "action_on_timeout": "none", 00:20:21.980 "timeout_us": 0, 00:20:21.980 "timeout_admin_us": 0, 00:20:21.980 "keep_alive_timeout_ms": 10000, 00:20:21.980 "transport_retry_count": 4, 00:20:21.980 "arbitration_burst": 0, 00:20:21.980 "low_priority_weight": 0, 00:20:21.980 "medium_priority_weight": 0, 00:20:21.980 "high_priority_weight": 0, 00:20:21.980 "nvme_adminq_poll_period_us": 10000, 00:20:21.980 "nvme_ioq_poll_period_us": 0, 00:20:21.980 "io_queue_requests": 512, 00:20:21.980 "delay_cmd_submit": true, 00:20:21.980 "bdev_retry_count": 3, 00:20:21.980 "transport_ack_timeout": 0, 00:20:21.980 "ctrlr_loss_timeout_sec": 0, 00:20:21.980 "reconnect_delay_sec": 0, 00:20:21.980 "fast_io_fail_timeout_sec": 0, 00:20:21.980 "generate_uuids": false, 00:20:21.980 "transport_tos": 0, 00:20:21.980 "io_path_stat": false, 00:20:21.980 "allow_accel_sequence": false 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_nvme_attach_controller", 00:20:21.980 "params": { 00:20:21.980 "name": "TLSTEST", 00:20:21.980 "trtype": "TCP", 00:20:21.980 "adrfam": "IPv4", 00:20:21.980 "traddr": "10.0.0.2", 00:20:21.980 "trsvcid": "4420", 00:20:21.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.980 "prchk_reftag": false, 00:20:21.980 "prchk_guard": false, 00:20:21.980 "ctrlr_loss_timeout_sec": 0, 00:20:21.980 "reconnect_delay_sec": 0, 00:20:21.980 "fast_io_fail_timeout_sec": 0, 00:20:21.980 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:21.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.980 "hdgst": false, 00:20:21.980 "ddgst": false 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_nvme_set_hotplug", 00:20:21.980 
"params": { 00:20:21.980 "period_us": 100000, 00:20:21.980 "enable": false 00:20:21.980 } 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "method": "bdev_wait_for_examine" 00:20:21.980 } 00:20:21.980 ] 00:20:21.980 }, 00:20:21.980 { 00:20:21.980 "subsystem": "nbd", 00:20:21.980 "config": [] 00:20:21.980 } 00:20:21.980 ] 00:20:21.980 }' 00:20:21.980 23:02:50 -- target/tls.sh@208 -- # killprocess 4127277 00:20:21.980 23:02:50 -- common/autotest_common.sh@926 -- # '[' -z 4127277 ']' 00:20:21.980 23:02:50 -- common/autotest_common.sh@930 -- # kill -0 4127277 00:20:21.980 23:02:50 -- common/autotest_common.sh@931 -- # uname 00:20:21.980 23:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:21.980 23:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4127277 00:20:21.980 23:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:21.980 23:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:21.980 23:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4127277' 00:20:21.980 killing process with pid 4127277 00:20:21.980 23:02:50 -- common/autotest_common.sh@945 -- # kill 4127277 00:20:21.980 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.980 00:20:21.980 Latency(us) 00:20:21.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.980 =================================================================================================================== 00:20:21.980 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.980 23:02:50 -- common/autotest_common.sh@950 -- # wait 4127277 00:20:22.242 23:02:50 -- target/tls.sh@209 -- # killprocess 4126915 00:20:22.242 23:02:50 -- common/autotest_common.sh@926 -- # '[' -z 4126915 ']' 00:20:22.242 23:02:50 -- common/autotest_common.sh@930 -- # kill -0 4126915 00:20:22.242 23:02:50 -- common/autotest_common.sh@931 -- # uname 00:20:22.242 23:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.242 23:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4126915 00:20:22.242 23:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:22.242 23:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:22.242 23:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4126915' 00:20:22.242 killing process with pid 4126915 00:20:22.242 23:02:50 -- common/autotest_common.sh@945 -- # kill 4126915 00:20:22.242 23:02:50 -- common/autotest_common.sh@950 -- # wait 4126915 00:20:22.242 23:02:50 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:22.242 23:02:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.242 23:02:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:22.242 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:20:22.242 23:02:50 -- target/tls.sh@212 -- # echo '{ 00:20:22.242 "subsystems": [ 00:20:22.242 { 00:20:22.242 "subsystem": "iobuf", 00:20:22.242 "config": [ 00:20:22.242 { 00:20:22.242 "method": "iobuf_set_options", 00:20:22.242 "params": { 00:20:22.242 "small_pool_count": 8192, 00:20:22.242 "large_pool_count": 1024, 00:20:22.242 "small_bufsize": 8192, 00:20:22.242 "large_bufsize": 135168 00:20:22.242 } 00:20:22.242 } 00:20:22.242 ] 00:20:22.242 }, 00:20:22.242 { 00:20:22.242 "subsystem": "sock", 00:20:22.242 "config": [ 00:20:22.242 { 00:20:22.242 "method": "sock_impl_set_options", 00:20:22.242 "params": { 00:20:22.242 "impl_name": "posix", 00:20:22.242 
"recv_buf_size": 2097152, 00:20:22.242 "send_buf_size": 2097152, 00:20:22.242 "enable_recv_pipe": true, 00:20:22.242 "enable_quickack": false, 00:20:22.242 "enable_placement_id": 0, 00:20:22.242 "enable_zerocopy_send_server": true, 00:20:22.242 "enable_zerocopy_send_client": false, 00:20:22.242 "zerocopy_threshold": 0, 00:20:22.242 "tls_version": 0, 00:20:22.242 "enable_ktls": false 00:20:22.242 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "sock_impl_set_options", 00:20:22.243 "params": { 00:20:22.243 "impl_name": "ssl", 00:20:22.243 "recv_buf_size": 4096, 00:20:22.243 "send_buf_size": 4096, 00:20:22.243 "enable_recv_pipe": true, 00:20:22.243 "enable_quickack": false, 00:20:22.243 "enable_placement_id": 0, 00:20:22.243 "enable_zerocopy_send_server": true, 00:20:22.243 "enable_zerocopy_send_client": false, 00:20:22.243 "zerocopy_threshold": 0, 00:20:22.243 "tls_version": 0, 00:20:22.243 "enable_ktls": false 00:20:22.243 } 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "vmd", 00:20:22.243 "config": [] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "accel", 00:20:22.243 "config": [ 00:20:22.243 { 00:20:22.243 "method": "accel_set_options", 00:20:22.243 "params": { 00:20:22.243 "small_cache_size": 128, 00:20:22.243 "large_cache_size": 16, 00:20:22.243 "task_count": 2048, 00:20:22.243 "sequence_count": 2048, 00:20:22.243 "buf_count": 2048 00:20:22.243 } 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "bdev", 00:20:22.243 "config": [ 00:20:22.243 { 00:20:22.243 "method": "bdev_set_options", 00:20:22.243 "params": { 00:20:22.243 "bdev_io_pool_size": 65535, 00:20:22.243 "bdev_io_cache_size": 256, 00:20:22.243 "bdev_auto_examine": true, 00:20:22.243 "iobuf_small_cache_size": 128, 00:20:22.243 "iobuf_large_cache_size": 16 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_raid_set_options", 00:20:22.243 "params": { 00:20:22.243 "process_window_size_kb": 1024 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_iscsi_set_options", 00:20:22.243 "params": { 00:20:22.243 "timeout_sec": 30 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_nvme_set_options", 00:20:22.243 "params": { 00:20:22.243 "action_on_timeout": "none", 00:20:22.243 "timeout_us": 0, 00:20:22.243 "timeout_admin_us": 0, 00:20:22.243 "keep_alive_timeout_ms": 10000, 00:20:22.243 "transport_retry_count": 4, 00:20:22.243 "arbitration_burst": 0, 00:20:22.243 "low_priority_weight": 0, 00:20:22.243 "medium_priority_weight": 0, 00:20:22.243 "high_priority_weight": 0, 00:20:22.243 "nvme_adminq_poll_period_us": 10000, 00:20:22.243 "nvme_ioq_poll_period_us": 0, 00:20:22.243 "io_queue_requests": 0, 00:20:22.243 "delay_cmd_submit": true, 00:20:22.243 "bdev_retry_count": 3, 00:20:22.243 "transport_ack_timeout": 0, 00:20:22.243 "ctrlr_loss_timeout_sec": 0, 00:20:22.243 "reconnect_delay_sec": 0, 00:20:22.243 "fast_io_fail_timeout_sec": 0, 00:20:22.243 "generate_uuids": false, 00:20:22.243 "transport_tos": 0, 00:20:22.243 "io_path_stat": false, 00:20:22.243 "allow_accel_sequence": false 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_nvme_set_hotplug", 00:20:22.243 "params": { 00:20:22.243 "period_us": 100000, 00:20:22.243 "enable": false 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_malloc_create", 00:20:22.243 "params": { 00:20:22.243 "name": "malloc0", 00:20:22.243 "num_blocks": 8192, 00:20:22.243 "block_size": 4096, 
00:20:22.243 "physical_block_size": 4096, 00:20:22.243 "uuid": "18af2088-b028-4204-9c35-ad7e12fb5686", 00:20:22.243 "optimal_io_boundary": 0 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "bdev_wait_for_examine" 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "nbd", 00:20:22.243 "config": [] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "scheduler", 00:20:22.243 "config": [ 00:20:22.243 { 00:20:22.243 "method": "framework_set_scheduler", 00:20:22.243 "params": { 00:20:22.243 "name": "static" 00:20:22.243 } 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "subsystem": "nvmf", 00:20:22.243 "config": [ 00:20:22.243 { 00:20:22.243 "method": "nvmf_set_config", 00:20:22.243 "params": { 00:20:22.243 "discovery_filter": "match_any", 00:20:22.243 "admin_cmd_passthru": { 00:20:22.243 "identify_ctrlr": false 00:20:22.243 } 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_set_max_subsystems", 00:20:22.243 "params": { 00:20:22.243 "max_subsystems": 1024 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_set_crdt", 00:20:22.243 "params": { 00:20:22.243 "crdt1": 0, 00:20:22.243 "crdt2": 0, 00:20:22.243 "crdt3": 0 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_create_transport", 00:20:22.243 "params": { 00:20:22.243 "trtype": "TCP", 00:20:22.243 "max_queue_depth": 128, 00:20:22.243 "max_io_qpairs_per_ctrlr": 127, 00:20:22.243 "in_capsule_data_size": 4096, 00:20:22.243 "max_io_size": 131072, 00:20:22.243 "io_unit_size": 131072, 00:20:22.243 "max_aq_depth": 128, 00:20:22.243 "num_shared_buffers": 511, 00:20:22.243 "buf_cache_size": 4294967295, 00:20:22.243 "dif_insert_or_strip": false, 00:20:22.243 "zcopy": false, 00:20:22.243 "c2h_success": false, 00:20:22.243 "sock_priority": 0, 00:20:22.243 "abort_timeout_sec": 1 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_create_subsystem", 00:20:22.243 "params": { 00:20:22.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.243 "allow_any_host": false, 00:20:22.243 "serial_number": "SPDK00000000000001", 00:20:22.243 "model_number": "SPDK bdev Controller", 00:20:22.243 "max_namespaces": 10, 00:20:22.243 "min_cntlid": 1, 00:20:22.243 "max_cntlid": 65519, 00:20:22.243 "ana_reporting": false 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_subsystem_add_host", 00:20:22.243 "params": { 00:20:22.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.243 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.243 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_subsystem_add_ns", 00:20:22.243 "params": { 00:20:22.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.243 "namespace": { 00:20:22.243 "nsid": 1, 00:20:22.243 "bdev_name": "malloc0", 00:20:22.243 "nguid": "18AF2088B02842049C35AD7E12FB5686", 00:20:22.243 "uuid": "18af2088-b028-4204-9c35-ad7e12fb5686" 00:20:22.243 } 00:20:22.243 } 00:20:22.243 }, 00:20:22.243 { 00:20:22.243 "method": "nvmf_subsystem_add_listener", 00:20:22.243 "params": { 00:20:22.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.243 "listen_address": { 00:20:22.243 "trtype": "TCP", 00:20:22.243 "adrfam": "IPv4", 00:20:22.243 "traddr": "10.0.0.2", 00:20:22.243 "trsvcid": "4420" 00:20:22.243 }, 00:20:22.243 "secure_channel": true 00:20:22.243 } 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 } 00:20:22.243 ] 00:20:22.243 }' 00:20:22.243 
23:02:50 -- nvmf/common.sh@469 -- # nvmfpid=4127637 00:20:22.243 23:02:50 -- nvmf/common.sh@470 -- # waitforlisten 4127637 00:20:22.243 23:02:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:22.243 23:02:50 -- common/autotest_common.sh@819 -- # '[' -z 4127637 ']' 00:20:22.243 23:02:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.243 23:02:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.243 23:02:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.243 23:02:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.243 23:02:50 -- common/autotest_common.sh@10 -- # set +x 00:20:22.504 [2024-06-09 23:02:50.463571] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:22.504 [2024-06-09 23:02:50.463637] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.504 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.505 [2024-06-09 23:02:50.530446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.505 [2024-06-09 23:02:50.592033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.505 [2024-06-09 23:02:50.592153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.505 [2024-06-09 23:02:50.592161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.505 [2024-06-09 23:02:50.592169] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:22.505 [2024-06-09 23:02:50.592187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.765 [2024-06-09 23:02:50.772873] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.765 [2024-06-09 23:02:50.804894] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:22.765 [2024-06-09 23:02:50.805107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.338 23:02:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.338 23:02:51 -- common/autotest_common.sh@852 -- # return 0 00:20:23.338 23:02:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.338 23:02:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:23.338 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:20:23.338 23:02:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.338 23:02:51 -- target/tls.sh@216 -- # bdevperf_pid=4127693 00:20:23.338 23:02:51 -- target/tls.sh@217 -- # waitforlisten 4127693 /var/tmp/bdevperf.sock 00:20:23.338 23:02:51 -- common/autotest_common.sh@819 -- # '[' -z 4127693 ']' 00:20:23.338 23:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.338 23:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:23.338 23:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.338 23:02:51 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:23.338 23:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:23.338 23:02:51 -- common/autotest_common.sh@10 -- # set +x 00:20:23.338 23:02:51 -- target/tls.sh@213 -- # echo '{ 00:20:23.338 "subsystems": [ 00:20:23.338 { 00:20:23.338 "subsystem": "iobuf", 00:20:23.338 "config": [ 00:20:23.338 { 00:20:23.338 "method": "iobuf_set_options", 00:20:23.338 "params": { 00:20:23.338 "small_pool_count": 8192, 00:20:23.338 "large_pool_count": 1024, 00:20:23.338 "small_bufsize": 8192, 00:20:23.338 "large_bufsize": 135168 00:20:23.338 } 00:20:23.338 } 00:20:23.338 ] 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "subsystem": "sock", 00:20:23.339 "config": [ 00:20:23.339 { 00:20:23.339 "method": "sock_impl_set_options", 00:20:23.339 "params": { 00:20:23.339 "impl_name": "posix", 00:20:23.339 "recv_buf_size": 2097152, 00:20:23.339 "send_buf_size": 2097152, 00:20:23.339 "enable_recv_pipe": true, 00:20:23.339 "enable_quickack": false, 00:20:23.339 "enable_placement_id": 0, 00:20:23.339 "enable_zerocopy_send_server": true, 00:20:23.339 "enable_zerocopy_send_client": false, 00:20:23.339 "zerocopy_threshold": 0, 00:20:23.339 "tls_version": 0, 00:20:23.339 "enable_ktls": false 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "sock_impl_set_options", 00:20:23.339 "params": { 00:20:23.339 "impl_name": "ssl", 00:20:23.339 "recv_buf_size": 4096, 00:20:23.339 "send_buf_size": 4096, 00:20:23.339 "enable_recv_pipe": true, 00:20:23.339 "enable_quickack": false, 00:20:23.339 "enable_placement_id": 0, 00:20:23.339 "enable_zerocopy_send_server": true, 00:20:23.339 "enable_zerocopy_send_client": false, 00:20:23.339 "zerocopy_threshold": 0, 00:20:23.339 "tls_version": 0, 
00:20:23.339 "enable_ktls": false 00:20:23.339 } 00:20:23.339 } 00:20:23.339 ] 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "subsystem": "vmd", 00:20:23.339 "config": [] 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "subsystem": "accel", 00:20:23.339 "config": [ 00:20:23.339 { 00:20:23.339 "method": "accel_set_options", 00:20:23.339 "params": { 00:20:23.339 "small_cache_size": 128, 00:20:23.339 "large_cache_size": 16, 00:20:23.339 "task_count": 2048, 00:20:23.339 "sequence_count": 2048, 00:20:23.339 "buf_count": 2048 00:20:23.339 } 00:20:23.339 } 00:20:23.339 ] 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "subsystem": "bdev", 00:20:23.339 "config": [ 00:20:23.339 { 00:20:23.339 "method": "bdev_set_options", 00:20:23.339 "params": { 00:20:23.339 "bdev_io_pool_size": 65535, 00:20:23.339 "bdev_io_cache_size": 256, 00:20:23.339 "bdev_auto_examine": true, 00:20:23.339 "iobuf_small_cache_size": 128, 00:20:23.339 "iobuf_large_cache_size": 16 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_raid_set_options", 00:20:23.339 "params": { 00:20:23.339 "process_window_size_kb": 1024 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_iscsi_set_options", 00:20:23.339 "params": { 00:20:23.339 "timeout_sec": 30 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_nvme_set_options", 00:20:23.339 "params": { 00:20:23.339 "action_on_timeout": "none", 00:20:23.339 "timeout_us": 0, 00:20:23.339 "timeout_admin_us": 0, 00:20:23.339 "keep_alive_timeout_ms": 10000, 00:20:23.339 "transport_retry_count": 4, 00:20:23.339 "arbitration_burst": 0, 00:20:23.339 "low_priority_weight": 0, 00:20:23.339 "medium_priority_weight": 0, 00:20:23.339 "high_priority_weight": 0, 00:20:23.339 "nvme_adminq_poll_period_us": 10000, 00:20:23.339 "nvme_ioq_poll_period_us": 0, 00:20:23.339 "io_queue_requests": 512, 00:20:23.339 "delay_cmd_submit": true, 00:20:23.339 "bdev_retry_count": 3, 00:20:23.339 "transport_ack_timeout": 0, 00:20:23.339 "ctrlr_loss_timeout_sec": 0, 00:20:23.339 "reconnect_delay_sec": 0, 00:20:23.339 "fast_io_fail_timeout_sec": 0, 00:20:23.339 "generate_uuids": false, 00:20:23.339 "transport_tos": 0, 00:20:23.339 "io_path_stat": false, 00:20:23.339 "allow_accel_sequence": false 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_nvme_attach_controller", 00:20:23.339 "params": { 00:20:23.339 "name": "TLSTEST", 00:20:23.339 "trtype": "TCP", 00:20:23.339 "adrfam": "IPv4", 00:20:23.339 "traddr": "10.0.0.2", 00:20:23.339 "trsvcid": "4420", 00:20:23.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.339 "prchk_reftag": false, 00:20:23.339 "prchk_guard": false, 00:20:23.339 "ctrlr_loss_timeout_sec": 0, 00:20:23.339 "reconnect_delay_sec": 0, 00:20:23.339 "fast_io_fail_timeout_sec": 0, 00:20:23.339 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:23.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.339 "hdgst": false, 00:20:23.339 "ddgst": false 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_nvme_set_hotplug", 00:20:23.339 "params": { 00:20:23.339 "period_us": 100000, 00:20:23.339 "enable": false 00:20:23.339 } 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "method": "bdev_wait_for_examine" 00:20:23.339 } 00:20:23.339 ] 00:20:23.339 }, 00:20:23.339 { 00:20:23.339 "subsystem": "nbd", 00:20:23.339 "config": [] 00:20:23.339 } 00:20:23.339 ] 00:20:23.339 }' 00:20:23.339 [2024-06-09 23:02:51.294492] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 
initialization... 00:20:23.339 [2024-06-09 23:02:51.294590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4127693 ] 00:20:23.339 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.339 [2024-06-09 23:02:51.350487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.339 [2024-06-09 23:02:51.400964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.339 [2024-06-09 23:02:51.516646] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.912 23:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.912 23:02:52 -- common/autotest_common.sh@852 -- # return 0 00:20:23.912 23:02:52 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:24.172 Running I/O for 10 seconds... 00:20:34.213 00:20:34.213 Latency(us) 00:20:34.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.213 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.213 Verification LBA range: start 0x0 length 0x2000 00:20:34.213 TLSTESTn1 : 10.05 1448.24 5.66 0.00 0.00 88241.06 7973.55 94808.75 00:20:34.213 =================================================================================================================== 00:20:34.213 Total : 1448.24 5.66 0.00 0.00 88241.06 7973.55 94808.75 00:20:34.213 0 00:20:34.213 23:03:02 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.213 23:03:02 -- target/tls.sh@223 -- # killprocess 4127693 00:20:34.213 23:03:02 -- common/autotest_common.sh@926 -- # '[' -z 4127693 ']' 00:20:34.213 23:03:02 -- common/autotest_common.sh@930 -- # kill -0 4127693 00:20:34.213 23:03:02 -- common/autotest_common.sh@931 -- # uname 00:20:34.213 23:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.213 23:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4127693 00:20:34.213 23:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:34.213 23:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:34.213 23:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4127693' 00:20:34.213 killing process with pid 4127693 00:20:34.213 23:03:02 -- common/autotest_common.sh@945 -- # kill 4127693 00:20:34.213 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.213 00:20:34.213 Latency(us) 00:20:34.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.213 =================================================================================================================== 00:20:34.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.213 23:03:02 -- common/autotest_common.sh@950 -- # wait 4127693 00:20:34.473 23:03:02 -- target/tls.sh@224 -- # killprocess 4127637 00:20:34.473 23:03:02 -- common/autotest_common.sh@926 -- # '[' -z 4127637 ']' 00:20:34.473 23:03:02 -- common/autotest_common.sh@930 -- # kill -0 4127637 00:20:34.473 23:03:02 -- common/autotest_common.sh@931 -- # uname 00:20:34.473 23:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:34.473 23:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4127637 00:20:34.473 23:03:02 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:34.473 23:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:34.474 23:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4127637' 00:20:34.474 killing process with pid 4127637 00:20:34.474 23:03:02 -- common/autotest_common.sh@945 -- # kill 4127637 00:20:34.474 23:03:02 -- common/autotest_common.sh@950 -- # wait 4127637 00:20:34.474 23:03:02 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:34.474 23:03:02 -- target/tls.sh@227 -- # cleanup 00:20:34.474 23:03:02 -- target/tls.sh@15 -- # process_shm --id 0 00:20:34.474 23:03:02 -- common/autotest_common.sh@796 -- # type=--id 00:20:34.474 23:03:02 -- common/autotest_common.sh@797 -- # id=0 00:20:34.474 23:03:02 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:34.474 23:03:02 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:34.474 23:03:02 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:34.474 23:03:02 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:34.474 23:03:02 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:34.474 23:03:02 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:34.474 nvmf_trace.0 00:20:34.735 23:03:02 -- common/autotest_common.sh@811 -- # return 0 00:20:34.735 23:03:02 -- target/tls.sh@16 -- # killprocess 4127693 00:20:34.735 23:03:02 -- common/autotest_common.sh@926 -- # '[' -z 4127693 ']' 00:20:34.735 23:03:02 -- common/autotest_common.sh@930 -- # kill -0 4127693 00:20:34.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4127693) - No such process 00:20:34.735 23:03:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4127693 is not found' 00:20:34.735 Process with pid 4127693 is not found 00:20:34.735 23:03:02 -- target/tls.sh@17 -- # nvmftestfini 00:20:34.735 23:03:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:34.735 23:03:02 -- nvmf/common.sh@116 -- # sync 00:20:34.735 23:03:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:34.735 23:03:02 -- nvmf/common.sh@119 -- # set +e 00:20:34.735 23:03:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:34.735 23:03:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:34.735 rmmod nvme_tcp 00:20:34.735 rmmod nvme_fabrics 00:20:34.735 rmmod nvme_keyring 00:20:34.735 23:03:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:34.735 23:03:02 -- nvmf/common.sh@123 -- # set -e 00:20:34.735 23:03:02 -- nvmf/common.sh@124 -- # return 0 00:20:34.735 23:03:02 -- nvmf/common.sh@477 -- # '[' -n 4127637 ']' 00:20:34.735 23:03:02 -- nvmf/common.sh@478 -- # killprocess 4127637 00:20:34.735 23:03:02 -- common/autotest_common.sh@926 -- # '[' -z 4127637 ']' 00:20:34.735 23:03:02 -- common/autotest_common.sh@930 -- # kill -0 4127637 00:20:34.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (4127637) - No such process 00:20:34.735 23:03:02 -- common/autotest_common.sh@953 -- # echo 'Process with pid 4127637 is not found' 00:20:34.735 Process with pid 4127637 is not found 00:20:34.735 23:03:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:34.735 23:03:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:34.735 23:03:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:34.735 23:03:02 -- nvmf/common.sh@273 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.735 23:03:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:34.735 23:03:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.735 23:03:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.735 23:03:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.651 23:03:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:36.651 23:03:04 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.651 00:20:36.651 real 1m11.365s 00:20:36.651 user 1m42.278s 00:20:36.651 sys 0m28.372s 00:20:36.651 23:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.651 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:20:36.651 ************************************ 00:20:36.651 END TEST nvmf_tls 00:20:36.651 ************************************ 00:20:36.912 23:03:04 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:36.912 23:03:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:36.912 23:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:36.912 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:20:36.912 ************************************ 00:20:36.912 START TEST nvmf_fips 00:20:36.912 ************************************ 00:20:36.912 23:03:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:36.912 * Looking for test storage... 
00:20:36.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:36.912 23:03:04 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.912 23:03:04 -- nvmf/common.sh@7 -- # uname -s 00:20:36.912 23:03:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.912 23:03:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.912 23:03:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.912 23:03:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.912 23:03:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.912 23:03:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.912 23:03:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.912 23:03:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.912 23:03:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.912 23:03:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.912 23:03:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.912 23:03:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.912 23:03:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.912 23:03:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.912 23:03:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.912 23:03:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.912 23:03:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.912 23:03:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.912 23:03:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.912 23:03:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 23:03:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 23:03:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 23:03:04 -- paths/export.sh@5 -- # export PATH 00:20:36.912 23:03:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.912 23:03:04 -- nvmf/common.sh@46 -- # : 0 00:20:36.913 23:03:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:36.913 23:03:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:36.913 23:03:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:36.913 23:03:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.913 23:03:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.913 23:03:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:36.913 23:03:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:36.913 23:03:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:36.913 23:03:04 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.913 23:03:04 -- fips/fips.sh@89 -- # check_openssl_version 00:20:36.913 23:03:04 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:36.913 23:03:04 -- fips/fips.sh@85 -- # openssl version 00:20:36.913 23:03:04 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:36.913 23:03:04 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:36.913 23:03:04 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:36.913 23:03:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:36.913 23:03:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:36.913 23:03:04 -- scripts/common.sh@335 -- # IFS=.-: 00:20:36.913 23:03:04 -- scripts/common.sh@335 -- # read -ra ver1 00:20:36.913 23:03:04 -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.913 23:03:04 -- scripts/common.sh@336 -- # read -ra ver2 00:20:36.913 23:03:04 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:36.913 23:03:04 -- scripts/common.sh@339 -- # ver1_l=3 00:20:36.913 23:03:04 -- scripts/common.sh@340 -- # ver2_l=3 00:20:36.913 23:03:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:36.913 23:03:04 -- scripts/common.sh@343 -- # case "$op" in 00:20:36.913 23:03:04 -- scripts/common.sh@347 -- # : 1 00:20:36.913 23:03:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:36.913 23:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.913 23:03:04 -- scripts/common.sh@364 -- # decimal 3 00:20:36.913 23:03:04 -- scripts/common.sh@352 -- # local d=3 00:20:36.913 23:03:04 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:04 -- scripts/common.sh@354 -- # echo 3 00:20:36.913 23:03:04 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:36.913 23:03:04 -- scripts/common.sh@365 -- # decimal 3 00:20:36.913 23:03:04 -- scripts/common.sh@352 -- # local d=3 00:20:36.913 23:03:04 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:04 -- scripts/common.sh@354 -- # echo 3 00:20:36.913 23:03:04 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:36.913 23:03:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:36.913 23:03:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:36.913 23:03:04 -- scripts/common.sh@363 -- # (( v++ )) 00:20:36.913 23:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.913 23:03:04 -- scripts/common.sh@364 -- # decimal 0 00:20:36.913 23:03:04 -- scripts/common.sh@352 -- # local d=0 00:20:36.913 23:03:04 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:04 -- scripts/common.sh@354 -- # echo 0 00:20:36.913 23:03:04 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:36.913 23:03:05 -- scripts/common.sh@365 -- # decimal 0 00:20:36.913 23:03:05 -- scripts/common.sh@352 -- # local d=0 00:20:36.913 23:03:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:05 -- scripts/common.sh@354 -- # echo 0 00:20:36.913 23:03:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:36.913 23:03:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:36.913 23:03:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:36.913 23:03:05 -- scripts/common.sh@363 -- # (( v++ )) 00:20:36.913 23:03:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.913 23:03:05 -- scripts/common.sh@364 -- # decimal 9 00:20:36.913 23:03:05 -- scripts/common.sh@352 -- # local d=9 00:20:36.913 23:03:05 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:05 -- scripts/common.sh@354 -- # echo 9 00:20:36.913 23:03:05 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:36.913 23:03:05 -- scripts/common.sh@365 -- # decimal 0 00:20:36.913 23:03:05 -- scripts/common.sh@352 -- # local d=0 00:20:36.913 23:03:05 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:36.913 23:03:05 -- scripts/common.sh@354 -- # echo 0 00:20:36.913 23:03:05 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:36.913 23:03:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:36.913 23:03:05 -- scripts/common.sh@366 -- # return 0 00:20:36.913 23:03:05 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:36.913 23:03:05 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:36.913 23:03:05 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:36.913 23:03:05 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:36.913 23:03:05 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:36.913 23:03:05 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:36.913 23:03:05 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:36.913 23:03:05 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:36.913 23:03:05 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:36.913 23:03:05 -- fips/fips.sh@114 -- # build_openssl_config 00:20:36.913 23:03:05 -- fips/fips.sh@37 -- # cat 00:20:36.913 23:03:05 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:36.913 23:03:05 -- fips/fips.sh@58 -- # cat - 00:20:36.913 23:03:05 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:36.913 23:03:05 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:36.913 23:03:05 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:36.913 23:03:05 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:36.913 23:03:05 -- fips/fips.sh@117 -- # openssl list -providers 00:20:36.913 23:03:05 -- fips/fips.sh@117 -- # grep name 00:20:37.174 23:03:05 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:37.174 23:03:05 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:37.174 23:03:05 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:37.174 23:03:05 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:37.174 23:03:05 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.174 23:03:05 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:37.174 23:03:05 -- fips/fips.sh@128 -- # : 00:20:37.174 23:03:05 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:37.174 23:03:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.174 23:03:05 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:37.174 23:03:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.174 23:03:05 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:37.174 23:03:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.174 23:03:05 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:37.174 23:03:05 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:37.174 23:03:05 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:37.174 Error setting digest 00:20:37.174 0062428DC87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:37.174 0062428DC87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:37.174 23:03:05 -- common/autotest_common.sh@643 -- # es=1 00:20:37.174 23:03:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:37.175 23:03:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:37.175 23:03:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
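The "Error setting digest" output above is the expected result, not a test failure: with the FIPS provider forced in through the generated spdk_fips.conf, MD5 cannot be fetched, so openssl md5 exits non-zero and fips.sh takes that as evidence that FIPS mode is genuinely enforced before running the TLS workload. A hedged stand-alone version of the same probe (spdk_fips.conf is assumed to be the config written by build_openssl_config, as in the trace):

# Hedged sketch of the FIPS probe performed above: MD5 must fail to fetch while the
# FIPS provider is the active default. spdk_fips.conf is the config generated by
# build_openssl_config in fips.sh; any input works, /dev/null is used here.
if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null >/dev/null 2>&1; then
    echo "MD5 still usable - FIPS provider is not enforcing" >&2
    exit 1
else
    echo "MD5 rejected - FIPS mode is active"
fi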
00:20:37.175 23:03:05 -- fips/fips.sh@131 -- # nvmftestinit 00:20:37.175 23:03:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:37.175 23:03:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.175 23:03:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:37.175 23:03:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:37.175 23:03:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:37.175 23:03:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.175 23:03:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.175 23:03:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.175 23:03:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:37.175 23:03:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:37.175 23:03:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:37.175 23:03:05 -- common/autotest_common.sh@10 -- # set +x 00:20:43.773 23:03:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:43.773 23:03:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:43.773 23:03:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:43.773 23:03:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:43.773 23:03:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:43.773 23:03:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:43.773 23:03:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:43.773 23:03:11 -- nvmf/common.sh@294 -- # net_devs=() 00:20:43.773 23:03:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:43.773 23:03:11 -- nvmf/common.sh@295 -- # e810=() 00:20:43.773 23:03:11 -- nvmf/common.sh@295 -- # local -ga e810 00:20:43.773 23:03:11 -- nvmf/common.sh@296 -- # x722=() 00:20:43.773 23:03:11 -- nvmf/common.sh@296 -- # local -ga x722 00:20:43.773 23:03:11 -- nvmf/common.sh@297 -- # mlx=() 00:20:43.773 23:03:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:43.773 23:03:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.773 23:03:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:43.773 23:03:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:43.773 23:03:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:43.773 23:03:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:43.773 Found 0000:4b:00.0 
(0x8086 - 0x159b) 00:20:43.773 23:03:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:43.773 23:03:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:43.773 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:43.773 23:03:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:43.773 23:03:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.773 23:03:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.773 23:03:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:43.773 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:43.773 23:03:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.773 23:03:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:43.773 23:03:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.773 23:03:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.773 23:03:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:43.773 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:43.773 23:03:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.773 23:03:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:43.773 23:03:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:43.773 23:03:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:43.773 23:03:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.773 23:03:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.773 23:03:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.773 23:03:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:43.773 23:03:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.773 23:03:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.773 23:03:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:43.773 23:03:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.773 23:03:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.773 23:03:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:43.773 23:03:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:43.773 23:03:11 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:20:43.774 23:03:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.774 23:03:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.774 23:03:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.774 23:03:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:43.774 23:03:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.774 23:03:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.774 23:03:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.774 23:03:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:43.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:20:43.774 00:20:43.774 --- 10.0.0.2 ping statistics --- 00:20:43.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.774 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:20:43.774 23:03:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:20:43.774 00:20:43.774 --- 10.0.0.1 ping statistics --- 00:20:43.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.774 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:20:43.774 23:03:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.774 23:03:11 -- nvmf/common.sh@410 -- # return 0 00:20:43.774 23:03:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:43.774 23:03:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.774 23:03:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:43.774 23:03:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:43.774 23:03:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.774 23:03:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:43.774 23:03:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:43.774 23:03:11 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:43.774 23:03:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:43.774 23:03:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:43.774 23:03:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.774 23:03:11 -- nvmf/common.sh@469 -- # nvmfpid=4134649 00:20:43.774 23:03:11 -- nvmf/common.sh@470 -- # waitforlisten 4134649 00:20:43.774 23:03:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:43.774 23:03:11 -- common/autotest_common.sh@819 -- # '[' -z 4134649 ']' 00:20:43.774 23:03:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.774 23:03:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:43.774 23:03:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.774 23:03:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:43.774 23:03:11 -- common/autotest_common.sh@10 -- # set +x 00:20:43.774 [2024-06-09 23:03:11.840855] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:20:43.774 [2024-06-09 23:03:11.840927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.774 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.774 [2024-06-09 23:03:11.913042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.048 [2024-06-09 23:03:11.983508] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:44.048 [2024-06-09 23:03:11.983632] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.048 [2024-06-09 23:03:11.983641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.048 [2024-06-09 23:03:11.983648] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.048 [2024-06-09 23:03:11.983673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.625 23:03:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.625 23:03:12 -- common/autotest_common.sh@852 -- # return 0 00:20:44.625 23:03:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:44.625 23:03:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:44.625 23:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:44.625 23:03:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.625 23:03:12 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:44.625 23:03:12 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.625 23:03:12 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.625 23:03:12 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:44.625 23:03:12 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.625 23:03:12 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.625 23:03:12 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.625 23:03:12 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:44.625 [2024-06-09 23:03:12.766980] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.625 [2024-06-09 23:03:12.782980] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:44.625 [2024-06-09 23:03:12.783179] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.886 malloc0 00:20:44.886 23:03:12 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.886 23:03:12 -- fips/fips.sh@148 -- # bdevperf_pid=4135009 00:20:44.886 23:03:12 -- fips/fips.sh@149 -- # waitforlisten 4135009 /var/tmp/bdevperf.sock 00:20:44.886 23:03:12 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.886 23:03:12 -- common/autotest_common.sh@819 -- # '[' -z 4135009 ']' 00:20:44.886 23:03:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.886 23:03:12 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.886 23:03:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.886 23:03:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.886 23:03:12 -- common/autotest_common.sh@10 -- # set +x 00:20:44.886 [2024-06-09 23:03:12.891319] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:44.886 [2024-06-09 23:03:12.891372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135009 ] 00:20:44.886 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.886 [2024-06-09 23:03:12.940781] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.886 [2024-06-09 23:03:12.991164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.829 23:03:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:45.829 23:03:13 -- common/autotest_common.sh@852 -- # return 0 00:20:45.829 23:03:13 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:45.829 [2024-06-09 23:03:13.775896] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.829 TLSTESTn1 00:20:45.829 23:03:13 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:45.829 Running I/O for 10 seconds... 
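For reference, the TLS data path being exercised here reduces to three steps on the initiator side: start bdevperf with -z so it waits to be driven over its RPC socket, attach a controller to the target's TCP listener using the pre-shared key written to key.txt earlier in the trace, then kick off the 10-second verify workload through bdevperf.py. A condensed recap of the commands traced above, with workspace paths shortened (a sketch of what the trace shows, not an excerpt of fips.sh):

  # key.txt holds the interchange-format PSK (NVMeTLSkey-1:01:...) and was chmod 0600 above
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The latency table that follows is the output of that perform_tests call; note that both the target listener and the attach are logged as experimental TLS support.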
00:20:58.064 00:20:58.064 Latency(us) 00:20:58.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.064 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:58.064 Verification LBA range: start 0x0 length 0x2000 00:20:58.064 TLSTESTn1 : 10.07 1362.92 5.32 0.00 0.00 93705.23 12451.84 106168.32 00:20:58.064 =================================================================================================================== 00:20:58.064 Total : 1362.92 5.32 0.00 0.00 93705.23 12451.84 106168.32 00:20:58.064 0 00:20:58.064 23:03:24 -- fips/fips.sh@1 -- # cleanup 00:20:58.064 23:03:24 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:58.064 23:03:24 -- common/autotest_common.sh@796 -- # type=--id 00:20:58.064 23:03:24 -- common/autotest_common.sh@797 -- # id=0 00:20:58.064 23:03:24 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:58.064 23:03:24 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:58.064 23:03:24 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:58.064 23:03:24 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:58.064 23:03:24 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:58.064 nvmf_trace.0 00:20:58.064 23:03:24 -- common/autotest_common.sh@811 -- # return 0 00:20:58.064 23:03:24 -- fips/fips.sh@16 -- # killprocess 4135009 00:20:58.064 23:03:24 -- common/autotest_common.sh@926 -- # '[' -z 4135009 ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@930 -- # kill -0 4135009 00:20:58.064 23:03:24 -- common/autotest_common.sh@931 -- # uname 00:20:58.064 23:03:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4135009 00:20:58.064 23:03:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:58.064 23:03:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4135009' 00:20:58.064 killing process with pid 4135009 00:20:58.064 23:03:24 -- common/autotest_common.sh@945 -- # kill 4135009 00:20:58.064 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.064 00:20:58.064 Latency(us) 00:20:58.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.064 =================================================================================================================== 00:20:58.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.064 23:03:24 -- common/autotest_common.sh@950 -- # wait 4135009 00:20:58.064 23:03:24 -- fips/fips.sh@17 -- # nvmftestfini 00:20:58.064 23:03:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:58.064 23:03:24 -- nvmf/common.sh@116 -- # sync 00:20:58.064 23:03:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:58.064 23:03:24 -- nvmf/common.sh@119 -- # set +e 00:20:58.064 23:03:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:58.064 23:03:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:58.064 rmmod nvme_tcp 00:20:58.064 rmmod nvme_fabrics 00:20:58.064 rmmod nvme_keyring 00:20:58.064 23:03:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:58.064 23:03:24 -- nvmf/common.sh@123 -- # set -e 00:20:58.064 23:03:24 -- nvmf/common.sh@124 -- # return 0 
00:20:58.064 23:03:24 -- nvmf/common.sh@477 -- # '[' -n 4134649 ']' 00:20:58.064 23:03:24 -- nvmf/common.sh@478 -- # killprocess 4134649 00:20:58.064 23:03:24 -- common/autotest_common.sh@926 -- # '[' -z 4134649 ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@930 -- # kill -0 4134649 00:20:58.064 23:03:24 -- common/autotest_common.sh@931 -- # uname 00:20:58.064 23:03:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.064 23:03:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4134649 00:20:58.064 23:03:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:58.064 23:03:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:58.065 23:03:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4134649' 00:20:58.065 killing process with pid 4134649 00:20:58.065 23:03:24 -- common/autotest_common.sh@945 -- # kill 4134649 00:20:58.065 23:03:24 -- common/autotest_common.sh@950 -- # wait 4134649 00:20:58.065 23:03:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:58.065 23:03:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:58.065 23:03:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:58.065 23:03:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:58.065 23:03:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:58.065 23:03:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.065 23:03:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.065 23:03:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.636 23:03:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:58.636 23:03:26 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:58.636 00:20:58.636 real 0m21.812s 00:20:58.636 user 0m21.536s 00:20:58.636 sys 0m10.651s 00:20:58.636 23:03:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.636 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:20:58.636 ************************************ 00:20:58.636 END TEST nvmf_fips 00:20:58.636 ************************************ 00:20:58.636 23:03:26 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:58.636 23:03:26 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:58.636 23:03:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:58.636 23:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:58.636 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:20:58.636 ************************************ 00:20:58.636 START TEST nvmf_fuzz 00:20:58.636 ************************************ 00:20:58.636 23:03:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:58.636 * Looking for test storage... 
00:20:58.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:58.636 23:03:26 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:58.636 23:03:26 -- nvmf/common.sh@7 -- # uname -s 00:20:58.636 23:03:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:58.636 23:03:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:58.636 23:03:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:58.636 23:03:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:58.636 23:03:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:58.636 23:03:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:58.636 23:03:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:58.636 23:03:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:58.636 23:03:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:58.636 23:03:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:58.636 23:03:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.636 23:03:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.636 23:03:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:58.636 23:03:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:58.636 23:03:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:58.636 23:03:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:58.636 23:03:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:58.636 23:03:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:58.636 23:03:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:58.636 23:03:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.636 23:03:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.636 23:03:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.637 23:03:26 -- paths/export.sh@5 -- # export PATH 00:20:58.637 23:03:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:58.637 23:03:26 -- nvmf/common.sh@46 -- # : 0 00:20:58.637 23:03:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:58.637 23:03:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:58.637 23:03:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:58.637 23:03:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:58.637 23:03:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:58.637 23:03:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:58.637 23:03:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:58.637 23:03:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:58.897 23:03:26 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:58.897 23:03:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:58.897 23:03:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:58.897 23:03:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:58.897 23:03:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:58.897 23:03:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:58.897 23:03:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.897 23:03:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:58.897 23:03:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:58.897 23:03:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:58.897 23:03:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:58.897 23:03:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:58.897 23:03:26 -- common/autotest_common.sh@10 -- # set +x 00:21:05.552 23:03:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:05.552 23:03:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:05.552 23:03:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:05.552 23:03:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:05.552 23:03:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:05.552 23:03:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:05.552 23:03:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:05.552 23:03:33 -- nvmf/common.sh@294 -- # net_devs=() 00:21:05.552 23:03:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:05.552 23:03:33 -- nvmf/common.sh@295 -- # e810=() 00:21:05.552 23:03:33 -- nvmf/common.sh@295 -- # local -ga e810 00:21:05.552 23:03:33 -- nvmf/common.sh@296 -- # x722=() 
00:21:05.552 23:03:33 -- nvmf/common.sh@296 -- # local -ga x722 00:21:05.552 23:03:33 -- nvmf/common.sh@297 -- # mlx=() 00:21:05.552 23:03:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:05.552 23:03:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.552 23:03:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:05.552 23:03:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:05.552 23:03:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.552 23:03:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:05.552 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:05.552 23:03:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:05.552 23:03:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:05.552 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:05.552 23:03:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.552 23:03:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.552 23:03:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.552 23:03:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:05.552 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:05.552 23:03:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:05.552 23:03:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:05.552 23:03:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.552 23:03:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.552 23:03:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:05.552 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:05.552 23:03:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.552 23:03:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:05.552 23:03:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:05.552 23:03:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:05.552 23:03:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.552 23:03:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.552 23:03:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.552 23:03:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:05.552 23:03:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.552 23:03:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.552 23:03:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:05.552 23:03:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.552 23:03:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.552 23:03:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:05.552 23:03:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:05.552 23:03:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.552 23:03:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.552 23:03:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.552 23:03:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.552 23:03:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:05.552 23:03:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.552 23:03:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.552 23:03:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.552 23:03:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:05.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:21:05.552 00:21:05.552 --- 10.0.0.2 ping statistics --- 00:21:05.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.552 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:21:05.552 23:03:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:21:05.553 00:21:05.553 --- 10.0.0.1 ping statistics --- 00:21:05.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.553 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:21:05.553 23:03:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.553 23:03:33 -- nvmf/common.sh@410 -- # return 0 00:21:05.553 23:03:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:05.553 23:03:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.553 23:03:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:05.553 23:03:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:05.553 23:03:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.553 23:03:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:05.553 23:03:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:05.553 23:03:33 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4141387 00:21:05.553 23:03:33 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:05.553 23:03:33 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:05.553 23:03:33 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4141387 00:21:05.553 23:03:33 -- common/autotest_common.sh@819 -- # '[' -z 4141387 ']' 00:21:05.553 23:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.553 23:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.553 23:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
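The fuzz pass being started here needs only a minimal target configuration: one malloc-backed namespace in subsystem cnode1 with a TCP listener inside the cvl_0_0_ns_spdk namespace, after which nvme_fuzz is pointed at that listener. The rpc_cmd and nvme_fuzz invocations traced below condense to roughly the following (a sketch with the rpc_cmd helper expanded to plain rpc.py calls; flags and seed are as logged):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a

A second nvme_fuzz pass further below replays the canned commands from example.json (-j) against the same subsystem instead of the random 30-second session seeded with -S 123456.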
00:21:05.553 23:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.553 23:03:33 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 23:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.497 23:03:34 -- common/autotest_common.sh@852 -- # return 0 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.497 23:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.497 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 23:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:06.497 23:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.497 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 Malloc0 00:21:06.497 23:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.497 23:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.497 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 23:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:06.497 23:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.497 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 23:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.497 23:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:06.497 23:03:34 -- common/autotest_common.sh@10 -- # set +x 00:21:06.497 23:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:06.497 23:03:34 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:38.620 Fuzzing completed. Shutting down the fuzz application 00:21:38.620 00:21:38.620 Dumping successful admin opcodes: 00:21:38.620 8, 9, 10, 24, 00:21:38.620 Dumping successful io opcodes: 00:21:38.620 0, 9, 00:21:38.620 NS: 0x200003aeff00 I/O qp, Total commands completed: 730239, total successful commands: 4265, random_seed: 3148921856 00:21:38.620 NS: 0x200003aeff00 admin qp, Total commands completed: 81167, total successful commands: 646, random_seed: 528281792 00:21:38.620 23:04:04 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:38.620 Fuzzing completed. 
Shutting down the fuzz application 00:21:38.620 00:21:38.620 Dumping successful admin opcodes: 00:21:38.620 24, 00:21:38.620 Dumping successful io opcodes: 00:21:38.620 00:21:38.620 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2392997147 00:21:38.620 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2393119975 00:21:38.620 23:04:06 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.620 23:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.620 23:04:06 -- common/autotest_common.sh@10 -- # set +x 00:21:38.620 23:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.620 23:04:06 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:38.620 23:04:06 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:38.620 23:04:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:38.620 23:04:06 -- nvmf/common.sh@116 -- # sync 00:21:38.620 23:04:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:38.620 23:04:06 -- nvmf/common.sh@119 -- # set +e 00:21:38.620 23:04:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:38.620 23:04:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:38.620 rmmod nvme_tcp 00:21:38.620 rmmod nvme_fabrics 00:21:38.620 rmmod nvme_keyring 00:21:38.620 23:04:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:38.620 23:04:06 -- nvmf/common.sh@123 -- # set -e 00:21:38.620 23:04:06 -- nvmf/common.sh@124 -- # return 0 00:21:38.620 23:04:06 -- nvmf/common.sh@477 -- # '[' -n 4141387 ']' 00:21:38.620 23:04:06 -- nvmf/common.sh@478 -- # killprocess 4141387 00:21:38.620 23:04:06 -- common/autotest_common.sh@926 -- # '[' -z 4141387 ']' 00:21:38.620 23:04:06 -- common/autotest_common.sh@930 -- # kill -0 4141387 00:21:38.620 23:04:06 -- common/autotest_common.sh@931 -- # uname 00:21:38.620 23:04:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:38.620 23:04:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4141387 00:21:38.620 23:04:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:38.620 23:04:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:38.620 23:04:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4141387' 00:21:38.620 killing process with pid 4141387 00:21:38.620 23:04:06 -- common/autotest_common.sh@945 -- # kill 4141387 00:21:38.620 23:04:06 -- common/autotest_common.sh@950 -- # wait 4141387 00:21:38.620 23:04:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:38.620 23:04:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:38.620 23:04:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:38.620 23:04:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.620 23:04:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:38.620 23:04:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.620 23:04:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.620 23:04:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.538 23:04:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:40.538 23:04:08 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:40.538 00:21:40.538 real 0m41.963s 00:21:40.538 user 0m55.452s 00:21:40.538 sys 
0m15.816s 00:21:40.538 23:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.538 23:04:08 -- common/autotest_common.sh@10 -- # set +x 00:21:40.538 ************************************ 00:21:40.538 END TEST nvmf_fuzz 00:21:40.538 ************************************ 00:21:40.538 23:04:08 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:40.538 23:04:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:40.538 23:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:40.538 23:04:08 -- common/autotest_common.sh@10 -- # set +x 00:21:40.538 ************************************ 00:21:40.538 START TEST nvmf_multiconnection 00:21:40.538 ************************************ 00:21:40.538 23:04:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:40.799 * Looking for test storage... 00:21:40.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:40.800 23:04:08 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.800 23:04:08 -- nvmf/common.sh@7 -- # uname -s 00:21:40.800 23:04:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.800 23:04:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.800 23:04:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.800 23:04:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.800 23:04:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.800 23:04:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.800 23:04:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.800 23:04:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.800 23:04:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.800 23:04:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.800 23:04:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.800 23:04:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.800 23:04:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.800 23:04:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.800 23:04:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.800 23:04:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.800 23:04:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.800 23:04:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.800 23:04:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.800 23:04:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.800 23:04:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.800 23:04:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.800 23:04:08 -- paths/export.sh@5 -- # export PATH 00:21:40.800 23:04:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.800 23:04:08 -- nvmf/common.sh@46 -- # : 0 00:21:40.800 23:04:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:40.800 23:04:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:40.800 23:04:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:40.800 23:04:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.800 23:04:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.800 23:04:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:40.800 23:04:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:40.800 23:04:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:40.800 23:04:08 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.800 23:04:08 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.800 23:04:08 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:40.800 23:04:08 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:40.800 23:04:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:40.800 23:04:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.800 23:04:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:40.800 23:04:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:40.800 23:04:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:40.800 23:04:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.800 23:04:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.800 23:04:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.800 23:04:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:40.800 23:04:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:40.800 23:04:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:40.800 23:04:08 -- common/autotest_common.sh@10 -- 
# set +x 00:21:47.402 23:04:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:47.402 23:04:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:47.402 23:04:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:47.402 23:04:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:47.403 23:04:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:47.403 23:04:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:47.403 23:04:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:47.403 23:04:14 -- nvmf/common.sh@294 -- # net_devs=() 00:21:47.403 23:04:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:47.403 23:04:14 -- nvmf/common.sh@295 -- # e810=() 00:21:47.403 23:04:14 -- nvmf/common.sh@295 -- # local -ga e810 00:21:47.403 23:04:14 -- nvmf/common.sh@296 -- # x722=() 00:21:47.403 23:04:14 -- nvmf/common.sh@296 -- # local -ga x722 00:21:47.403 23:04:14 -- nvmf/common.sh@297 -- # mlx=() 00:21:47.403 23:04:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:47.403 23:04:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.403 23:04:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:47.403 23:04:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:47.403 23:04:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.403 23:04:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:47.403 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:47.403 23:04:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:47.403 23:04:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:47.403 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:47.403 23:04:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.403 23:04:14 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.403 23:04:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.403 23:04:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.403 23:04:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:47.403 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:47.403 23:04:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.403 23:04:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:47.403 23:04:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.403 23:04:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.403 23:04:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:47.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:47.403 23:04:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.403 23:04:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:47.403 23:04:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:47.403 23:04:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:47.403 23:04:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.403 23:04:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.403 23:04:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.403 23:04:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:47.403 23:04:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.403 23:04:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.403 23:04:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:47.403 23:04:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.403 23:04:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.403 23:04:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:47.403 23:04:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:47.403 23:04:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.403 23:04:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.403 23:04:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.403 23:04:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.403 23:04:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:47.403 23:04:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.403 23:04:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.403 23:04:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.403 23:04:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:47.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:47.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:21:47.403 00:21:47.403 --- 10.0.0.2 ping statistics --- 00:21:47.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.403 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:21:47.403 23:04:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:21:47.403 00:21:47.403 --- 10.0.0.1 ping statistics --- 00:21:47.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.403 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:47.403 23:04:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.403 23:04:15 -- nvmf/common.sh@410 -- # return 0 00:21:47.403 23:04:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:47.403 23:04:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.403 23:04:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:47.403 23:04:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:47.403 23:04:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.403 23:04:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:47.403 23:04:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:47.403 23:04:15 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:47.403 23:04:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:47.403 23:04:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:47.403 23:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:47.403 23:04:15 -- nvmf/common.sh@469 -- # nvmfpid=4151798 00:21:47.403 23:04:15 -- nvmf/common.sh@470 -- # waitforlisten 4151798 00:21:47.403 23:04:15 -- common/autotest_common.sh@819 -- # '[' -z 4151798 ']' 00:21:47.403 23:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.403 23:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.403 23:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.403 23:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.403 23:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:47.403 23:04:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:47.403 [2024-06-09 23:04:15.228675] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:47.403 [2024-06-09 23:04:15.228739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.403 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.403 [2024-06-09 23:04:15.297894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:47.403 [2024-06-09 23:04:15.371223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.403 [2024-06-09 23:04:15.371356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:47.403 [2024-06-09 23:04:15.371367] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.403 [2024-06-09 23:04:15.371375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.403 [2024-06-09 23:04:15.371512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.403 [2024-06-09 23:04:15.371699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.403 [2024-06-09 23:04:15.371701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:47.403 [2024-06-09 23:04:15.371548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.977 23:04:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.977 23:04:15 -- common/autotest_common.sh@852 -- # return 0 00:21:47.977 23:04:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:47.977 23:04:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.977 23:04:15 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 23:04:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.977 23:04:16 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 [2024-06-09 23:04:16.040595] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:47.977 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.977 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 Malloc1 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 [2024-06-09 23:04:16.108034] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:47.977 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:47.977 23:04:16 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 Malloc2 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:47.977 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.977 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:47.977 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.977 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.239 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 Malloc3 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.239 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 Malloc4 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.239 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 Malloc5 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.239 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 Malloc6 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.239 23:04:16 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.239 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.239 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:48.239 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.239 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 Malloc7 00:21:48.240 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.240 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:48.240 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.240 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.240 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:48.240 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.240 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.240 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.240 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:48.240 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.240 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.501 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.501 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:48.501 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.501 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 Malloc8 00:21:48.501 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.501 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:48.501 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.501 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.501 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:48.501 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.501 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.501 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:48.501 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.501 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.501 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.501 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 Malloc9 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.502 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 Malloc10 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.502 23:04:16 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 Malloc11 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:48.502 23:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.502 23:04:16 -- common/autotest_common.sh@10 -- # set +x 00:21:48.502 23:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.502 23:04:16 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:48.502 23:04:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:48.502 23:04:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:50.419 23:04:18 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:50.419 23:04:18 -- common/autotest_common.sh@1177 -- # local i=0 00:21:50.419 23:04:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.419 23:04:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:50.419 23:04:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:52.333 23:04:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:52.333 23:04:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:52.333 23:04:20 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:52.333 23:04:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:52.333 23:04:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.333 23:04:20 -- common/autotest_common.sh@1187 -- # return 0 00:21:52.333 23:04:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.333 23:04:20 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:53.749 23:04:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:53.749 23:04:21 -- common/autotest_common.sh@1177 -- # local i=0 00:21:53.749 23:04:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.749 23:04:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:53.749 23:04:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:55.660 23:04:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:55.660 23:04:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:55.660 23:04:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:55.660 23:04:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:55.660 23:04:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.660 23:04:23 -- common/autotest_common.sh@1187 -- # return 0 00:21:55.660 23:04:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.660 23:04:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:57.571 23:04:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:57.571 23:04:25 -- common/autotest_common.sh@1177 -- # local i=0 00:21:57.571 23:04:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.571 23:04:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:57.571 23:04:25 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:21:59.486 23:04:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:59.486 23:04:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:59.486 23:04:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:59.486 23:04:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:59.486 23:04:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:59.486 23:04:27 -- common/autotest_common.sh@1187 -- # return 0 00:21:59.486 23:04:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.487 23:04:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:00.873 23:04:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:00.873 23:04:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:00.873 23:04:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.873 23:04:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:00.873 23:04:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:03.416 23:04:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:03.416 23:04:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:03.416 23:04:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:03.416 23:04:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:03.416 23:04:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.416 23:04:31 -- common/autotest_common.sh@1187 -- # return 0 00:22:03.416 23:04:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.416 23:04:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:04.799 23:04:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:04.799 23:04:32 -- common/autotest_common.sh@1177 -- # local i=0 00:22:04.799 23:04:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.799 23:04:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:04.799 23:04:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:06.716 23:04:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:06.716 23:04:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:06.716 23:04:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:06.716 23:04:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:06.716 23:04:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.716 23:04:34 -- common/autotest_common.sh@1187 -- # return 0 00:22:06.716 23:04:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.716 23:04:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:08.633 23:04:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:08.633 23:04:36 -- common/autotest_common.sh@1177 -- # local i=0 00:22:08.633 23:04:36 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:08.633 23:04:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:08.633 23:04:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:10.548 23:04:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:10.548 23:04:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:10.548 23:04:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:10.548 23:04:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:10.548 23:04:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.548 23:04:38 -- common/autotest_common.sh@1187 -- # return 0 00:22:10.548 23:04:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.548 23:04:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:12.461 23:04:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:12.461 23:04:40 -- common/autotest_common.sh@1177 -- # local i=0 00:22:12.461 23:04:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.461 23:04:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:12.461 23:04:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:14.403 23:04:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:14.403 23:04:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:14.403 23:04:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:14.403 23:04:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:14.403 23:04:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.403 23:04:42 -- common/autotest_common.sh@1187 -- # return 0 00:22:14.403 23:04:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.403 23:04:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:16.312 23:04:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:16.312 23:04:43 -- common/autotest_common.sh@1177 -- # local i=0 00:22:16.312 23:04:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.312 23:04:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:16.312 23:04:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:18.225 23:04:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:18.225 23:04:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:18.225 23:04:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:18.225 23:04:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:18.225 23:04:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.225 23:04:46 -- common/autotest_common.sh@1187 -- # return 0 00:22:18.225 23:04:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.225 23:04:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:20.140 23:04:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:20.140 
23:04:47 -- common/autotest_common.sh@1177 -- # local i=0 00:22:20.140 23:04:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:20.140 23:04:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:20.140 23:04:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:22.052 23:04:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:22.052 23:04:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:22.052 23:04:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:22.052 23:04:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:22.052 23:04:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:22.052 23:04:49 -- common/autotest_common.sh@1187 -- # return 0 00:22:22.052 23:04:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.052 23:04:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:23.968 23:04:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:23.968 23:04:51 -- common/autotest_common.sh@1177 -- # local i=0 00:22:23.968 23:04:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.968 23:04:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:23.968 23:04:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:25.879 23:04:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:25.879 23:04:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:25.879 23:04:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:25.879 23:04:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:25.879 23:04:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:25.879 23:04:53 -- common/autotest_common.sh@1187 -- # return 0 00:22:25.879 23:04:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.879 23:04:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:27.790 23:04:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:27.790 23:04:55 -- common/autotest_common.sh@1177 -- # local i=0 00:22:27.790 23:04:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.790 23:04:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:27.790 23:04:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:29.704 23:04:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:29.704 23:04:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:29.704 23:04:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:29.704 23:04:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:29.704 23:04:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.704 23:04:57 -- common/autotest_common.sh@1187 -- # return 0 00:22:29.704 23:04:57 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:29.704 [global] 00:22:29.704 thread=1 00:22:29.704 invalidate=1 00:22:29.704 rw=read 00:22:29.704 time_based=1 00:22:29.704 
runtime=10 00:22:29.704 ioengine=libaio 00:22:29.704 direct=1 00:22:29.704 bs=262144 00:22:29.704 iodepth=64 00:22:29.704 norandommap=1 00:22:29.704 numjobs=1 00:22:29.704 00:22:29.704 [job0] 00:22:29.704 filename=/dev/nvme0n1 00:22:29.704 [job1] 00:22:29.704 filename=/dev/nvme10n1 00:22:29.704 [job2] 00:22:29.704 filename=/dev/nvme1n1 00:22:29.704 [job3] 00:22:29.704 filename=/dev/nvme2n1 00:22:29.704 [job4] 00:22:29.704 filename=/dev/nvme3n1 00:22:29.704 [job5] 00:22:29.704 filename=/dev/nvme4n1 00:22:29.704 [job6] 00:22:29.704 filename=/dev/nvme5n1 00:22:29.704 [job7] 00:22:29.704 filename=/dev/nvme6n1 00:22:29.704 [job8] 00:22:29.704 filename=/dev/nvme7n1 00:22:29.704 [job9] 00:22:29.704 filename=/dev/nvme8n1 00:22:29.704 [job10] 00:22:29.704 filename=/dev/nvme9n1 00:22:30.008 Could not set queue depth (nvme0n1) 00:22:30.008 Could not set queue depth (nvme10n1) 00:22:30.008 Could not set queue depth (nvme1n1) 00:22:30.008 Could not set queue depth (nvme2n1) 00:22:30.008 Could not set queue depth (nvme3n1) 00:22:30.008 Could not set queue depth (nvme4n1) 00:22:30.008 Could not set queue depth (nvme5n1) 00:22:30.008 Could not set queue depth (nvme6n1) 00:22:30.008 Could not set queue depth (nvme7n1) 00:22:30.008 Could not set queue depth (nvme8n1) 00:22:30.008 Could not set queue depth (nvme9n1) 00:22:30.281 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.281 fio-3.35 00:22:30.281 Starting 11 threads 00:22:42.586 00:22:42.586 job0: (groupid=0, jobs=1): err= 0: pid=4160527: Sun Jun 9 23:05:08 2024 00:22:42.586 read: IOPS=597, BW=149MiB/s (157MB/s)(1509MiB/10110msec) 00:22:42.586 slat (usec): min=8, max=395433, avg=1352.58, stdev=9029.16 00:22:42.586 clat (msec): min=8, max=578, avg=105.69, stdev=76.76 00:22:42.586 lat (msec): min=8, max=578, avg=107.04, stdev=77.43 00:22:42.586 clat percentiles (msec): 00:22:42.586 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 47], 20.00th=[ 60], 00:22:42.586 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 92], 00:22:42.586 | 70.00th=[ 110], 80.00th=[ 140], 90.00th=[ 197], 95.00th=[ 284], 00:22:42.586 | 99.00th=[ 414], 99.50th=[ 535], 99.90th=[ 575], 99.95th=[ 575], 00:22:42.586 | 99.99th=[ 575] 00:22:42.586 bw ( KiB/s): min=34816, max=245248, per=7.11%, avg=152934.40, 
stdev=61103.38, samples=20 00:22:42.586 iops : min= 136, max= 958, avg=597.40, stdev=238.69, samples=20 00:22:42.586 lat (msec) : 10=0.08%, 20=2.22%, 50=9.71%, 100=52.72%, 250=28.84% 00:22:42.586 lat (msec) : 500=5.91%, 750=0.51% 00:22:42.586 cpu : usr=0.26%, sys=1.92%, ctx=1544, majf=0, minf=4097 00:22:42.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:42.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.586 issued rwts: total=6037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.586 job1: (groupid=0, jobs=1): err= 0: pid=4160546: Sun Jun 9 23:05:08 2024 00:22:42.586 read: IOPS=728, BW=182MiB/s (191MB/s)(1833MiB/10068msec) 00:22:42.586 slat (usec): min=6, max=365457, avg=1214.62, stdev=5632.07 00:22:42.586 clat (msec): min=16, max=547, avg=86.60, stdev=52.53 00:22:42.586 lat (msec): min=16, max=547, avg=87.81, stdev=52.86 00:22:42.586 clat percentiles (msec): 00:22:42.586 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 61], 00:22:42.586 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 86], 00:22:42.586 | 70.00th=[ 94], 80.00th=[ 102], 90.00th=[ 114], 95.00th=[ 136], 00:22:42.586 | 99.00th=[ 338], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:22:42.586 | 99.99th=[ 550] 00:22:42.586 bw ( KiB/s): min= 3591, max=274432, per=8.65%, avg=186086.75, stdev=66531.93, samples=20 00:22:42.586 iops : min= 14, max= 1072, avg=726.90, stdev=259.89, samples=20 00:22:42.586 lat (msec) : 20=0.07%, 50=8.82%, 100=69.41%, 250=20.20%, 500=0.74% 00:22:42.586 lat (msec) : 750=0.76% 00:22:42.586 cpu : usr=0.26%, sys=2.33%, ctx=1570, majf=0, minf=4097 00:22:42.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:42.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.586 issued rwts: total=7332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.586 job2: (groupid=0, jobs=1): err= 0: pid=4160566: Sun Jun 9 23:05:08 2024 00:22:42.586 read: IOPS=702, BW=176MiB/s (184MB/s)(1766MiB/10055msec) 00:22:42.586 slat (usec): min=6, max=96209, avg=1317.41, stdev=4034.09 00:22:42.587 clat (msec): min=23, max=252, avg=89.67, stdev=27.48 00:22:42.587 lat (msec): min=23, max=252, avg=90.98, stdev=27.76 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 56], 20.00th=[ 66], 00:22:42.587 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 94], 00:22:42.587 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 140], 00:22:42.587 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 192], 00:22:42.587 | 99.99th=[ 253] 00:22:42.587 bw ( KiB/s): min=102912, max=280064, per=8.33%, avg=179225.60, stdev=42527.15, samples=20 00:22:42.587 iops : min= 402, max= 1094, avg=700.10, stdev=166.12, samples=20 00:22:42.587 lat (msec) : 50=5.56%, 100=62.73%, 250=31.70%, 500=0.01% 00:22:42.587 cpu : usr=0.26%, sys=2.48%, ctx=1571, majf=0, minf=4097 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=7064,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job3: (groupid=0, jobs=1): err= 0: pid=4160578: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=893, BW=223MiB/s (234MB/s)(2253MiB/10086msec) 00:22:42.587 slat (usec): min=6, max=146809, avg=734.08, stdev=3418.74 00:22:42.587 clat (msec): min=2, max=271, avg=70.80, stdev=34.99 00:22:42.587 lat (msec): min=2, max=277, avg=71.54, stdev=35.44 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 36], 20.00th=[ 44], 00:22:42.587 | 30.00th=[ 51], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 72], 00:22:42.587 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 130], 00:22:42.587 | 99.00th=[ 215], 99.50th=[ 234], 99.90th=[ 271], 99.95th=[ 271], 00:22:42.587 | 99.99th=[ 271] 00:22:42.587 bw ( KiB/s): min=107008, max=374272, per=10.66%, avg=229137.05, stdev=73502.99, samples=20 00:22:42.587 iops : min= 418, max= 1462, avg=895.05, stdev=286.95, samples=20 00:22:42.587 lat (msec) : 4=0.06%, 10=0.28%, 20=0.65%, 50=27.72%, 100=56.67% 00:22:42.587 lat (msec) : 250=14.31%, 500=0.31% 00:22:42.587 cpu : usr=0.44%, sys=2.97%, ctx=2712, majf=0, minf=4097 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=9013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job4: (groupid=0, jobs=1): err= 0: pid=4160584: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=672, BW=168MiB/s (176MB/s)(1690MiB/10044msec) 00:22:42.587 slat (usec): min=8, max=195523, avg=1053.59, stdev=5546.59 00:22:42.587 clat (msec): min=4, max=508, avg=93.93, stdev=68.40 00:22:42.587 lat (msec): min=4, max=512, avg=94.98, stdev=69.04 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 35], 20.00th=[ 45], 00:22:42.587 | 30.00th=[ 54], 40.00th=[ 65], 50.00th=[ 81], 60.00th=[ 95], 00:22:42.587 | 70.00th=[ 106], 80.00th=[ 124], 90.00th=[ 161], 95.00th=[ 218], 00:22:42.587 | 99.00th=[ 384], 99.50th=[ 502], 99.90th=[ 510], 99.95th=[ 510], 00:22:42.587 | 99.99th=[ 510] 00:22:42.587 bw ( KiB/s): min=42496, max=354816, per=7.97%, avg=171402.40, stdev=85685.97, samples=20 00:22:42.587 iops : min= 166, max= 1386, avg=669.50, stdev=334.74, samples=20 00:22:42.587 lat (msec) : 10=0.43%, 20=1.73%, 50=23.26%, 100=40.03%, 250=31.15% 00:22:42.587 lat (msec) : 500=2.86%, 750=0.55% 00:22:42.587 cpu : usr=0.29%, sys=2.27%, ctx=1921, majf=0, minf=4097 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job5: (groupid=0, jobs=1): err= 0: pid=4160610: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=1096, BW=274MiB/s (287MB/s)(2758MiB/10062msec) 00:22:42.587 slat (usec): min=7, max=69041, avg=822.75, stdev=2333.08 00:22:42.587 clat (msec): min=2, max=155, avg=57.48, stdev=24.28 00:22:42.587 lat (msec): min=2, max=155, avg=58.30, stdev=24.56 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 15], 5.00th=[ 33], 
10.00th=[ 35], 20.00th=[ 39], 00:22:42.587 | 30.00th=[ 42], 40.00th=[ 44], 50.00th=[ 48], 60.00th=[ 56], 00:22:42.587 | 70.00th=[ 65], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 104], 00:22:42.587 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 131], 99.95th=[ 138], 00:22:42.587 | 99.99th=[ 155] 00:22:42.587 bw ( KiB/s): min=160256, max=392192, per=13.06%, avg=280806.40, stdev=90250.07, samples=20 00:22:42.587 iops : min= 626, max= 1532, avg=1096.90, stdev=352.54, samples=20 00:22:42.587 lat (msec) : 4=0.21%, 10=0.57%, 20=0.41%, 50=52.62%, 100=39.00% 00:22:42.587 lat (msec) : 250=7.20% 00:22:42.587 cpu : usr=0.48%, sys=3.70%, ctx=2685, majf=0, minf=3534 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=11032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job6: (groupid=0, jobs=1): err= 0: pid=4160622: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=717, BW=179MiB/s (188MB/s)(1804MiB/10062msec) 00:22:42.587 slat (usec): min=6, max=77748, avg=1162.70, stdev=3595.05 00:22:42.587 clat (msec): min=15, max=198, avg=87.97, stdev=27.09 00:22:42.587 lat (msec): min=15, max=208, avg=89.13, stdev=27.51 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 32], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 68], 00:22:42.587 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 92], 00:22:42.587 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 127], 95.00th=[ 144], 00:22:42.587 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 192], 00:22:42.587 | 99.99th=[ 199] 00:22:42.587 bw ( KiB/s): min=111104, max=248832, per=8.52%, avg=183116.80, stdev=38947.18, samples=20 00:22:42.587 iops : min= 434, max= 972, avg=715.30, stdev=152.14, samples=20 00:22:42.587 lat (msec) : 20=0.15%, 50=5.27%, 100=67.25%, 250=27.33% 00:22:42.587 cpu : usr=0.15%, sys=2.22%, ctx=1919, majf=0, minf=4097 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=7216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job7: (groupid=0, jobs=1): err= 0: pid=4160633: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=705, BW=176MiB/s (185MB/s)(1778MiB/10086msec) 00:22:42.587 slat (usec): min=6, max=80425, avg=1287.08, stdev=3643.65 00:22:42.587 clat (msec): min=34, max=215, avg=89.35, stdev=22.55 00:22:42.587 lat (msec): min=34, max=219, avg=90.63, stdev=22.86 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 71], 00:22:42.587 | 30.00th=[ 78], 40.00th=[ 83], 50.00th=[ 88], 60.00th=[ 93], 00:22:42.587 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 129], 00:22:42.587 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 205], 99.95th=[ 207], 00:22:42.587 | 99.99th=[ 215] 00:22:42.587 bw ( KiB/s): min=135168, max=263680, per=8.39%, avg=180454.40, stdev=28263.67, samples=20 00:22:42.587 iops : min= 528, max= 1030, avg=704.90, stdev=110.40, samples=20 00:22:42.587 lat (msec) : 50=1.74%, 100=71.51%, 250=26.74% 00:22:42.587 cpu : usr=0.22%, sys=2.53%, ctx=1717, majf=0, minf=4097 
00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=7112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job8: (groupid=0, jobs=1): err= 0: pid=4160665: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=800, BW=200MiB/s (210MB/s)(2014MiB/10055msec) 00:22:42.587 slat (usec): min=5, max=110840, avg=1064.66, stdev=3497.92 00:22:42.587 clat (msec): min=10, max=237, avg=78.77, stdev=28.95 00:22:42.587 lat (msec): min=10, max=237, avg=79.83, stdev=29.26 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 28], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 56], 00:22:42.587 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 84], 00:22:42.587 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 122], 00:22:42.587 | 99.00th=[ 199], 99.50th=[ 211], 99.90th=[ 230], 99.95th=[ 236], 00:22:42.587 | 99.99th=[ 239] 00:22:42.587 bw ( KiB/s): min=134144, max=317952, per=9.51%, avg=204569.60, stdev=51630.29, samples=20 00:22:42.587 iops : min= 524, max= 1242, avg=799.10, stdev=201.68, samples=20 00:22:42.587 lat (msec) : 20=0.47%, 50=14.69%, 100=66.54%, 250=18.30% 00:22:42.587 cpu : usr=0.26%, sys=2.53%, ctx=2061, majf=0, minf=4097 00:22:42.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:42.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.587 issued rwts: total=8054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.587 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.587 job9: (groupid=0, jobs=1): err= 0: pid=4160678: Sun Jun 9 23:05:08 2024 00:22:42.587 read: IOPS=639, BW=160MiB/s (168MB/s)(1614MiB/10092msec) 00:22:42.587 slat (usec): min=6, max=77491, avg=1335.93, stdev=3883.22 00:22:42.587 clat (msec): min=17, max=232, avg=98.56, stdev=27.59 00:22:42.587 lat (msec): min=17, max=232, avg=99.90, stdev=27.94 00:22:42.587 clat percentiles (msec): 00:22:42.587 | 1.00th=[ 36], 5.00th=[ 54], 10.00th=[ 66], 20.00th=[ 77], 00:22:42.587 | 30.00th=[ 86], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 103], 00:22:42.587 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 136], 95.00th=[ 146], 00:22:42.587 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 222], 99.95th=[ 222], 00:22:42.587 | 99.99th=[ 232] 00:22:42.588 bw ( KiB/s): min=111326, max=242176, per=7.61%, avg=163697.50, stdev=32957.21, samples=20 00:22:42.588 iops : min= 434, max= 946, avg=639.40, stdev=128.81, samples=20 00:22:42.588 lat (msec) : 20=0.08%, 50=3.53%, 100=51.42%, 250=44.97% 00:22:42.588 cpu : usr=0.30%, sys=2.12%, ctx=1678, majf=0, minf=4097 00:22:42.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:42.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.588 issued rwts: total=6457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.588 job10: (groupid=0, jobs=1): err= 0: pid=4160688: Sun Jun 9 23:05:08 2024 00:22:42.588 read: IOPS=880, BW=220MiB/s (231MB/s)(2212MiB/10048msec) 00:22:42.588 slat (usec): min=6, max=52134, avg=1092.63, stdev=2954.14 00:22:42.588 clat 
(msec): min=26, max=155, avg=71.52, stdev=23.27 00:22:42.588 lat (msec): min=26, max=155, avg=72.61, stdev=23.54 00:22:42.588 clat percentiles (msec): 00:22:42.588 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 50], 00:22:42.588 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 78], 00:22:42.588 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 103], 95.00th=[ 113], 00:22:42.588 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 155], 99.95th=[ 155], 00:22:42.588 | 99.99th=[ 157] 00:22:42.588 bw ( KiB/s): min=143360, max=358400, per=10.46%, avg=224870.40, stdev=64517.69, samples=20 00:22:42.588 iops : min= 560, max= 1400, avg=878.40, stdev=252.02, samples=20 00:22:42.588 lat (msec) : 50=21.34%, 100=67.30%, 250=11.36% 00:22:42.588 cpu : usr=0.29%, sys=3.04%, ctx=1938, majf=0, minf=4097 00:22:42.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:42.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.588 issued rwts: total=8847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.588 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.588 00:22:42.588 Run status group 0 (all jobs): 00:22:42.588 READ: bw=2100MiB/s (2202MB/s), 149MiB/s-274MiB/s (157MB/s-287MB/s), io=20.7GiB (22.3GB), run=10044-10110msec 00:22:42.588 00:22:42.588 Disk stats (read/write): 00:22:42.588 nvme0n1: ios=12005/0, merge=0/0, ticks=1245175/0, in_queue=1245175, util=96.50% 00:22:42.588 nvme10n1: ios=14328/0, merge=0/0, ticks=1216248/0, in_queue=1216248, util=96.60% 00:22:42.588 nvme1n1: ios=13685/0, merge=0/0, ticks=1216250/0, in_queue=1216250, util=97.03% 00:22:42.588 nvme2n1: ios=17727/0, merge=0/0, ticks=1222987/0, in_queue=1222987, util=97.21% 00:22:42.588 nvme3n1: ios=13057/0, merge=0/0, ticks=1226256/0, in_queue=1226256, util=97.33% 00:22:42.588 nvme4n1: ios=21682/0, merge=0/0, ticks=1216809/0, in_queue=1216809, util=97.83% 00:22:42.588 nvme5n1: ios=14098/0, merge=0/0, ticks=1219061/0, in_queue=1219061, util=98.00% 00:22:42.588 nvme6n1: ios=13962/0, merge=0/0, ticks=1213068/0, in_queue=1213068, util=98.21% 00:22:42.588 nvme7n1: ios=15697/0, merge=0/0, ticks=1219389/0, in_queue=1219389, util=98.69% 00:22:42.588 nvme8n1: ios=12662/0, merge=0/0, ticks=1215717/0, in_queue=1215717, util=98.98% 00:22:42.588 nvme9n1: ios=17329/0, merge=0/0, ticks=1214272/0, in_queue=1214272, util=99.15% 00:22:42.588 23:05:08 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:42.588 [global] 00:22:42.588 thread=1 00:22:42.588 invalidate=1 00:22:42.588 rw=randwrite 00:22:42.588 time_based=1 00:22:42.588 runtime=10 00:22:42.588 ioengine=libaio 00:22:42.588 direct=1 00:22:42.588 bs=262144 00:22:42.588 iodepth=64 00:22:42.588 norandommap=1 00:22:42.588 numjobs=1 00:22:42.588 00:22:42.588 [job0] 00:22:42.588 filename=/dev/nvme0n1 00:22:42.588 [job1] 00:22:42.588 filename=/dev/nvme10n1 00:22:42.588 [job2] 00:22:42.588 filename=/dev/nvme1n1 00:22:42.588 [job3] 00:22:42.588 filename=/dev/nvme2n1 00:22:42.588 [job4] 00:22:42.588 filename=/dev/nvme3n1 00:22:42.588 [job5] 00:22:42.588 filename=/dev/nvme4n1 00:22:42.588 [job6] 00:22:42.588 filename=/dev/nvme5n1 00:22:42.588 [job7] 00:22:42.588 filename=/dev/nvme6n1 00:22:42.588 [job8] 00:22:42.588 filename=/dev/nvme7n1 00:22:42.588 [job9] 00:22:42.588 filename=/dev/nvme8n1 00:22:42.588 [job10] 00:22:42.588 filename=/dev/nvme9n1 00:22:42.588 
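For reference, the job file listed above is what SPDK's fio-wrapper generates from "-p nvmf -i 262144 -d 64 -t randwrite -r 10": one libaio job per connected namespace, 256 KiB blocks, queue depth 64, 10-second time-based run. The same workload can be driven against a single namespace with a plain fio invocation; the following is a minimal sketch, assuming fio with the libaio engine is installed and that the /dev/nvmeXn1 devices created by the nvme connect loop above are present (the device name here is illustrative, not taken from a specific job):

    # 256 KiB random writes at queue depth 64 for a 10 s time-based run,
    # mirroring one job section of the generated job file above
    fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 \
        --iodepth=64 --ioengine=libaio --direct=1 --time_based --runtime=10 \
        --norandommap --numjobs=1

The wrapper simply expands this per-job pattern across all eleven /dev/nvmeXn1 devices and runs them in one fio process, which is why the log below reports "Starting 11 threads" and a combined run-status group.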
Could not set queue depth (nvme0n1) 00:22:42.588 Could not set queue depth (nvme10n1) 00:22:42.588 Could not set queue depth (nvme1n1) 00:22:42.588 Could not set queue depth (nvme2n1) 00:22:42.588 Could not set queue depth (nvme3n1) 00:22:42.588 Could not set queue depth (nvme4n1) 00:22:42.588 Could not set queue depth (nvme5n1) 00:22:42.588 Could not set queue depth (nvme6n1) 00:22:42.588 Could not set queue depth (nvme7n1) 00:22:42.588 Could not set queue depth (nvme8n1) 00:22:42.588 Could not set queue depth (nvme9n1) 00:22:42.588 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.588 fio-3.35 00:22:42.588 Starting 11 threads 00:22:52.590 00:22:52.590 job0: (groupid=0, jobs=1): err= 0: pid=4162557: Sun Jun 9 23:05:20 2024 00:22:52.590 write: IOPS=534, BW=134MiB/s (140MB/s)(1358MiB/10158msec); 0 zone resets 00:22:52.590 slat (usec): min=26, max=231960, avg=1837.35, stdev=5178.06 00:22:52.590 clat (msec): min=24, max=427, avg=117.80, stdev=48.96 00:22:52.590 lat (msec): min=24, max=427, avg=119.64, stdev=49.42 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 66], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 84], 00:22:52.590 | 30.00th=[ 90], 40.00th=[ 95], 50.00th=[ 101], 60.00th=[ 109], 00:22:52.590 | 70.00th=[ 122], 80.00th=[ 153], 90.00th=[ 190], 95.00th=[ 215], 00:22:52.590 | 99.00th=[ 296], 99.50th=[ 334], 99.90th=[ 422], 99.95th=[ 422], 00:22:52.590 | 99.99th=[ 426] 00:22:52.590 bw ( KiB/s): min=55406, max=212480, per=11.76%, avg=137400.70, stdev=43731.69, samples=20 00:22:52.590 iops : min= 216, max= 830, avg=536.70, stdev=170.87, samples=20 00:22:52.590 lat (msec) : 50=0.15%, 100=49.23%, 250=48.56%, 500=2.06% 00:22:52.590 cpu : usr=1.18%, sys=1.77%, ctx=1386, majf=0, minf=1 00:22:52.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:52.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.590 issued rwts: total=0,5430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.590 job1: (groupid=0, jobs=1): err= 0: pid=4162589: Sun Jun 9 
23:05:20 2024 00:22:52.590 write: IOPS=291, BW=72.9MiB/s (76.4MB/s)(740MiB/10155msec); 0 zone resets 00:22:52.590 slat (usec): min=26, max=329220, avg=3016.52, stdev=14319.64 00:22:52.590 clat (msec): min=6, max=1162, avg=216.33, stdev=217.88 00:22:52.590 lat (msec): min=6, max=1162, avg=219.34, stdev=220.70 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 28], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 73], 00:22:52.590 | 30.00th=[ 80], 40.00th=[ 89], 50.00th=[ 128], 60.00th=[ 171], 00:22:52.590 | 70.00th=[ 224], 80.00th=[ 288], 90.00th=[ 527], 95.00th=[ 709], 00:22:52.590 | 99.00th=[ 1036], 99.50th=[ 1062], 99.90th=[ 1133], 99.95th=[ 1167], 00:22:52.590 | 99.99th=[ 1167] 00:22:52.590 bw ( KiB/s): min= 8192, max=229888, per=6.35%, avg=74188.80, stdev=64873.90, samples=20 00:22:52.590 iops : min= 32, max= 898, avg=289.80, stdev=253.41, samples=20 00:22:52.590 lat (msec) : 10=0.14%, 20=0.30%, 50=1.69%, 100=40.56%, 250=30.94% 00:22:52.590 lat (msec) : 500=14.83%, 750=6.96%, 1000=3.14%, 2000=1.45% 00:22:52.590 cpu : usr=0.62%, sys=0.89%, ctx=943, majf=0, minf=1 00:22:52.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:22:52.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.590 issued rwts: total=0,2961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.590 job2: (groupid=0, jobs=1): err= 0: pid=4162610: Sun Jun 9 23:05:20 2024 00:22:52.590 write: IOPS=474, BW=119MiB/s (124MB/s)(1205MiB/10161msec); 0 zone resets 00:22:52.590 slat (usec): min=20, max=63285, avg=2026.07, stdev=4266.89 00:22:52.590 clat (msec): min=28, max=383, avg=132.78, stdev=46.16 00:22:52.590 lat (msec): min=28, max=383, avg=134.81, stdev=46.68 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 46], 5.00th=[ 81], 10.00th=[ 91], 20.00th=[ 103], 00:22:52.590 | 30.00th=[ 110], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 129], 00:22:52.590 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 199], 95.00th=[ 236], 00:22:52.590 | 99.00th=[ 284], 99.50th=[ 296], 99.90th=[ 372], 99.95th=[ 372], 00:22:52.590 | 99.99th=[ 384] 00:22:52.590 bw ( KiB/s): min=68608, max=171008, per=10.43%, avg=121804.80, stdev=31462.37, samples=20 00:22:52.590 iops : min= 268, max= 668, avg=475.80, stdev=122.90, samples=20 00:22:52.590 lat (msec) : 50=1.16%, 100=17.11%, 250=78.51%, 500=3.22% 00:22:52.590 cpu : usr=0.94%, sys=1.38%, ctx=1366, majf=0, minf=1 00:22:52.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:52.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.590 issued rwts: total=0,4821,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.590 job3: (groupid=0, jobs=1): err= 0: pid=4162623: Sun Jun 9 23:05:20 2024 00:22:52.590 write: IOPS=364, BW=91.0MiB/s (95.4MB/s)(928MiB/10193msec); 0 zone resets 00:22:52.590 slat (usec): min=26, max=129067, avg=2609.21, stdev=5626.49 00:22:52.590 clat (msec): min=18, max=480, avg=173.06, stdev=42.65 00:22:52.590 lat (msec): min=18, max=480, avg=175.67, stdev=42.86 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 57], 5.00th=[ 123], 10.00th=[ 140], 20.00th=[ 153], 00:22:52.590 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 174], 00:22:52.590 | 
70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 209], 95.00th=[ 255], 00:22:52.590 | 99.00th=[ 313], 99.50th=[ 351], 99.90th=[ 447], 99.95th=[ 481], 00:22:52.590 | 99.99th=[ 481] 00:22:52.590 bw ( KiB/s): min=65536, max=110592, per=7.99%, avg=93363.20, stdev=11475.48, samples=20 00:22:52.590 iops : min= 256, max= 432, avg=364.70, stdev=44.83, samples=20 00:22:52.590 lat (msec) : 20=0.11%, 50=0.54%, 100=2.72%, 250=91.57%, 500=5.07% 00:22:52.590 cpu : usr=0.75%, sys=1.17%, ctx=1091, majf=0, minf=1 00:22:52.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:52.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.590 issued rwts: total=0,3711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.590 job4: (groupid=0, jobs=1): err= 0: pid=4162634: Sun Jun 9 23:05:20 2024 00:22:52.590 write: IOPS=514, BW=129MiB/s (135MB/s)(1303MiB/10130msec); 0 zone resets 00:22:52.590 slat (usec): min=26, max=45632, avg=1850.29, stdev=3544.57 00:22:52.590 clat (msec): min=28, max=262, avg=122.51, stdev=23.42 00:22:52.590 lat (msec): min=28, max=262, avg=124.36, stdev=23.48 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 70], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 106], 00:22:52.590 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 123], 60.00th=[ 129], 00:22:52.590 | 70.00th=[ 134], 80.00th=[ 142], 90.00th=[ 148], 95.00th=[ 157], 00:22:52.590 | 99.00th=[ 178], 99.50th=[ 222], 99.90th=[ 253], 99.95th=[ 259], 00:22:52.590 | 99.99th=[ 264] 00:22:52.590 bw ( KiB/s): min=112640, max=168448, per=11.28%, avg=131788.80, stdev=16320.21, samples=20 00:22:52.590 iops : min= 440, max= 658, avg=514.80, stdev=63.75, samples=20 00:22:52.590 lat (msec) : 50=0.38%, 100=14.14%, 250=85.34%, 500=0.13% 00:22:52.590 cpu : usr=1.20%, sys=1.63%, ctx=1516, majf=0, minf=1 00:22:52.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:52.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.590 issued rwts: total=0,5211,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.590 job5: (groupid=0, jobs=1): err= 0: pid=4162657: Sun Jun 9 23:05:20 2024 00:22:52.590 write: IOPS=387, BW=96.8MiB/s (102MB/s)(980MiB/10118msec); 0 zone resets 00:22:52.590 slat (usec): min=26, max=120483, avg=2382.69, stdev=6080.34 00:22:52.590 clat (msec): min=10, max=404, avg=162.62, stdev=47.05 00:22:52.590 lat (msec): min=13, max=404, avg=165.01, stdev=47.51 00:22:52.590 clat percentiles (msec): 00:22:52.590 | 1.00th=[ 67], 5.00th=[ 101], 10.00th=[ 108], 20.00th=[ 121], 00:22:52.590 | 30.00th=[ 134], 40.00th=[ 146], 50.00th=[ 159], 60.00th=[ 169], 00:22:52.591 | 70.00th=[ 186], 80.00th=[ 205], 90.00th=[ 228], 95.00th=[ 241], 00:22:52.591 | 99.00th=[ 288], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 405], 00:22:52.591 | 99.99th=[ 405] 00:22:52.591 bw ( KiB/s): min=43094, max=144896, per=8.45%, avg=98692.30, stdev=23380.68, samples=20 00:22:52.591 iops : min= 168, max= 566, avg=385.50, stdev=91.37, samples=20 00:22:52.591 lat (msec) : 20=0.08%, 50=0.23%, 100=4.59%, 250=91.35%, 500=3.75% 00:22:52.591 cpu : usr=0.84%, sys=1.16%, ctx=1299, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:52.591 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,3918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 job6: (groupid=0, jobs=1): err= 0: pid=4162669: Sun Jun 9 23:05:20 2024 00:22:52.591 write: IOPS=463, BW=116MiB/s (121MB/s)(1167MiB/10082msec); 0 zone resets 00:22:52.591 slat (usec): min=28, max=145387, avg=2086.25, stdev=5240.09 00:22:52.591 clat (msec): min=14, max=335, avg=135.90, stdev=41.49 00:22:52.591 lat (msec): min=15, max=335, avg=137.98, stdev=41.90 00:22:52.591 clat percentiles (msec): 00:22:52.591 | 1.00th=[ 66], 5.00th=[ 85], 10.00th=[ 92], 20.00th=[ 103], 00:22:52.591 | 30.00th=[ 111], 40.00th=[ 118], 50.00th=[ 127], 60.00th=[ 138], 00:22:52.591 | 70.00th=[ 148], 80.00th=[ 174], 90.00th=[ 203], 95.00th=[ 218], 00:22:52.591 | 99.00th=[ 251], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 292], 00:22:52.591 | 99.99th=[ 334] 00:22:52.591 bw ( KiB/s): min=64512, max=173056, per=10.09%, avg=117888.00, stdev=33218.33, samples=20 00:22:52.591 iops : min= 252, max= 676, avg=460.50, stdev=129.76, samples=20 00:22:52.591 lat (msec) : 20=0.06%, 50=0.34%, 100=17.74%, 250=80.72%, 500=1.14% 00:22:52.591 cpu : usr=1.24%, sys=1.37%, ctx=1338, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,4668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 job7: (groupid=0, jobs=1): err= 0: pid=4162674: Sun Jun 9 23:05:20 2024 00:22:52.591 write: IOPS=423, BW=106MiB/s (111MB/s)(1070MiB/10113msec); 0 zone resets 00:22:52.591 slat (usec): min=20, max=158310, avg=2256.86, stdev=6064.65 00:22:52.591 clat (msec): min=18, max=318, avg=148.95, stdev=40.18 00:22:52.591 lat (msec): min=18, max=318, avg=151.21, stdev=40.36 00:22:52.591 clat percentiles (msec): 00:22:52.591 | 1.00th=[ 64], 5.00th=[ 102], 10.00th=[ 109], 20.00th=[ 120], 00:22:52.591 | 30.00th=[ 128], 40.00th=[ 136], 50.00th=[ 142], 60.00th=[ 148], 00:22:52.591 | 70.00th=[ 161], 80.00th=[ 171], 90.00th=[ 207], 95.00th=[ 224], 00:22:52.591 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 321], 00:22:52.591 | 99.99th=[ 321] 00:22:52.591 bw ( KiB/s): min=70656, max=151040, per=9.24%, avg=107904.00, stdev=22996.47, samples=20 00:22:52.591 iops : min= 276, max= 590, avg=421.50, stdev=89.83, samples=20 00:22:52.591 lat (msec) : 20=0.02%, 50=0.61%, 100=3.62%, 250=92.71%, 500=3.04% 00:22:52.591 cpu : usr=0.96%, sys=1.18%, ctx=1248, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,4278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 job8: (groupid=0, jobs=1): err= 0: pid=4162694: Sun Jun 9 23:05:20 2024 00:22:52.591 write: IOPS=271, BW=68.0MiB/s (71.3MB/s)(695MiB/10226msec); 0 zone resets 00:22:52.591 slat (usec): min=28, max=112301, avg=3454.29, stdev=8876.02 00:22:52.591 clat (msec): min=6, 
max=865, avg=231.84, stdev=129.56 00:22:52.591 lat (msec): min=8, max=865, avg=235.30, stdev=131.27 00:22:52.591 clat percentiles (msec): 00:22:52.591 | 1.00th=[ 37], 5.00th=[ 56], 10.00th=[ 130], 20.00th=[ 169], 00:22:52.591 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 199], 60.00th=[ 207], 00:22:52.591 | 70.00th=[ 224], 80.00th=[ 284], 90.00th=[ 372], 95.00th=[ 558], 00:22:52.591 | 99.00th=[ 701], 99.50th=[ 726], 99.90th=[ 827], 99.95th=[ 869], 00:22:52.591 | 99.99th=[ 869] 00:22:52.591 bw ( KiB/s): min=20480, max=108544, per=5.95%, avg=69555.20, stdev=25783.81, samples=20 00:22:52.591 iops : min= 80, max= 424, avg=271.70, stdev=100.72, samples=20 00:22:52.591 lat (msec) : 10=0.07%, 20=0.29%, 50=3.74%, 100=3.38%, 250=68.63% 00:22:52.591 lat (msec) : 500=17.84%, 750=5.83%, 1000=0.22% 00:22:52.591 cpu : usr=0.67%, sys=0.69%, ctx=932, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,2780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 job9: (groupid=0, jobs=1): err= 0: pid=4162707: Sun Jun 9 23:05:20 2024 00:22:52.591 write: IOPS=404, BW=101MiB/s (106MB/s)(1028MiB/10167msec); 0 zone resets 00:22:52.591 slat (usec): min=22, max=44902, avg=2384.37, stdev=4654.24 00:22:52.591 clat (msec): min=47, max=398, avg=155.80, stdev=41.15 00:22:52.591 lat (msec): min=47, max=398, avg=158.18, stdev=41.50 00:22:52.591 clat percentiles (msec): 00:22:52.591 | 1.00th=[ 86], 5.00th=[ 107], 10.00th=[ 118], 20.00th=[ 130], 00:22:52.591 | 30.00th=[ 136], 40.00th=[ 142], 50.00th=[ 148], 60.00th=[ 155], 00:22:52.591 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 236], 00:22:52.591 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 380], 99.95th=[ 380], 00:22:52.591 | 99.99th=[ 401] 00:22:52.591 bw ( KiB/s): min=58880, max=147968, per=8.87%, avg=103628.80, stdev=20168.68, samples=20 00:22:52.591 iops : min= 230, max= 578, avg=404.80, stdev=78.78, samples=20 00:22:52.591 lat (msec) : 50=0.10%, 100=2.94%, 250=93.09%, 500=3.87% 00:22:52.591 cpu : usr=0.92%, sys=1.11%, ctx=1156, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,4111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 job10: (groupid=0, jobs=1): err= 0: pid=4162717: Sun Jun 9 23:05:20 2024 00:22:52.591 write: IOPS=474, BW=119MiB/s (124MB/s)(1196MiB/10075msec); 0 zone resets 00:22:52.591 slat (usec): min=23, max=122563, avg=2012.55, stdev=5338.09 00:22:52.591 clat (msec): min=21, max=281, avg=132.75, stdev=52.17 00:22:52.591 lat (msec): min=21, max=283, avg=134.77, stdev=52.78 00:22:52.591 clat percentiles (msec): 00:22:52.591 | 1.00th=[ 66], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 81], 00:22:52.591 | 30.00th=[ 85], 40.00th=[ 111], 50.00th=[ 127], 60.00th=[ 142], 00:22:52.591 | 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 215], 95.00th=[ 239], 00:22:52.591 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 279], 00:22:52.591 | 99.99th=[ 284] 00:22:52.591 bw ( KiB/s): min=69632, max=205312, per=10.34%, 
avg=120806.40, stdev=44186.98, samples=20 00:22:52.591 iops : min= 272, max= 802, avg=471.90, stdev=172.61, samples=20 00:22:52.591 lat (msec) : 50=0.38%, 100=33.63%, 250=63.22%, 500=2.78% 00:22:52.591 cpu : usr=0.94%, sys=1.44%, ctx=1393, majf=0, minf=1 00:22:52.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:52.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.591 issued rwts: total=0,4782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.591 00:22:52.591 Run status group 0 (all jobs): 00:22:52.591 WRITE: bw=1141MiB/s (1196MB/s), 68.0MiB/s-134MiB/s (71.3MB/s-140MB/s), io=11.4GiB (12.2GB), run=10075-10226msec 00:22:52.591 00:22:52.591 Disk stats (read/write): 00:22:52.591 nvme0n1: ios=49/10781, merge=0/0, ticks=1866/1202378, in_queue=1204244, util=100.00% 00:22:52.591 nvme10n1: ios=47/5871, merge=0/0, ticks=2901/1206269, in_queue=1209170, util=99.97% 00:22:52.591 nvme1n1: ios=49/9582, merge=0/0, ticks=567/1224108, in_queue=1224675, util=99.89% 00:22:52.591 nvme2n1: ios=43/7347, merge=0/0, ticks=1540/1222282, in_queue=1223822, util=99.93% 00:22:52.591 nvme3n1: ios=0/10387, merge=0/0, ticks=0/1227499, in_queue=1227499, util=97.37% 00:22:52.591 nvme4n1: ios=46/7800, merge=0/0, ticks=2220/1225306, in_queue=1227526, util=99.89% 00:22:52.591 nvme5n1: ios=46/9030, merge=0/0, ticks=1824/1192924, in_queue=1194748, util=99.96% 00:22:52.591 nvme6n1: ios=42/8533, merge=0/0, ticks=1689/1217960, in_queue=1219649, util=99.97% 00:22:52.591 nvme7n1: ios=0/5467, merge=0/0, ticks=0/1206300, in_queue=1206300, util=98.69% 00:22:52.591 nvme8n1: ios=35/8155, merge=0/0, ticks=1590/1223886, in_queue=1225476, util=99.94% 00:22:52.591 nvme9n1: ios=41/9186, merge=0/0, ticks=2743/1192825, in_queue=1195568, util=100.00% 00:22:52.591 23:05:20 -- target/multiconnection.sh@36 -- # sync 00:22:52.591 23:05:20 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:52.591 23:05:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.591 23:05:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:52.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:52.591 23:05:20 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:52.591 23:05:20 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.591 23:05:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.591 23:05:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:52.591 23:05:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.591 23:05:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:52.592 23:05:20 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.592 23:05:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.592 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.592 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:22:52.592 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.592 23:05:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.592 23:05:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:52.853 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:52.853 23:05:20 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK2 00:22:52.853 23:05:20 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.853 23:05:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.853 23:05:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:52.853 23:05:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.853 23:05:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:52.853 23:05:20 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.853 23:05:20 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:52.853 23:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.853 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:22:52.853 23:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.853 23:05:20 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.853 23:05:20 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:53.115 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:53.115 23:05:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:53.115 23:05:21 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.115 23:05:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.115 23:05:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:53.481 23:05:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.481 23:05:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:53.481 23:05:21 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.481 23:05:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:53.481 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.481 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:22:53.481 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.481 23:05:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.481 23:05:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:53.481 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:53.481 23:05:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:53.481 23:05:21 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.481 23:05:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.481 23:05:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:53.481 23:05:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.481 23:05:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:53.481 23:05:21 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.481 23:05:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:53.481 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.481 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:22:53.481 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.481 23:05:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.481 23:05:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:53.740 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:53.740 23:05:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:53.740 23:05:21 -- common/autotest_common.sh@1198 -- # local 
i=0 00:22:53.740 23:05:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.740 23:05:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:53.740 23:05:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:53.740 23:05:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.000 23:05:21 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.000 23:05:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:54.000 23:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.000 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:22:54.000 23:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.000 23:05:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.000 23:05:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:54.000 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:54.000 23:05:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:54.000 23:05:22 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.000 23:05:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.000 23:05:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:54.000 23:05:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.000 23:05:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:54.000 23:05:22 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.000 23:05:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:54.000 23:05:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.000 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.000 23:05:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.001 23:05:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.001 23:05:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:54.259 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:54.259 23:05:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:54.259 23:05:22 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.259 23:05:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.259 23:05:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:54.259 23:05:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:54.259 23:05:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.519 23:05:22 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.519 23:05:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:54.519 23:05:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.519 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.519 23:05:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.519 23:05:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.519 23:05:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:54.519 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:54.519 23:05:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:54.519 23:05:22 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.519 23:05:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.519 
23:05:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:54.519 23:05:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.519 23:05:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:54.519 23:05:22 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.519 23:05:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:54.519 23:05:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.519 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.780 23:05:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.780 23:05:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.780 23:05:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:54.780 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:54.780 23:05:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:54.780 23:05:22 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.780 23:05:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.780 23:05:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:54.780 23:05:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.780 23:05:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:54.780 23:05:22 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.780 23:05:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:54.780 23:05:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.780 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:54.780 23:05:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.780 23:05:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.780 23:05:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:55.042 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:55.042 23:05:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:55.042 23:05:23 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.042 23:05:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.042 23:05:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:55.042 23:05:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.042 23:05:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:55.042 23:05:23 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.042 23:05:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:55.042 23:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.042 23:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:55.042 23:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.042 23:05:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.042 23:05:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:55.042 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:55.042 23:05:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:55.042 23:05:23 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.042 23:05:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.042 23:05:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:55.042 23:05:23 
-- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.042 23:05:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:55.042 23:05:23 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.042 23:05:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:55.042 23:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.042 23:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:55.304 23:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.304 23:05:23 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:55.304 23:05:23 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:55.304 23:05:23 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:55.304 23:05:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:55.304 23:05:23 -- nvmf/common.sh@116 -- # sync 00:22:55.304 23:05:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:55.304 23:05:23 -- nvmf/common.sh@119 -- # set +e 00:22:55.304 23:05:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:55.304 23:05:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:55.304 rmmod nvme_tcp 00:22:55.304 rmmod nvme_fabrics 00:22:55.304 rmmod nvme_keyring 00:22:55.304 23:05:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:55.304 23:05:23 -- nvmf/common.sh@123 -- # set -e 00:22:55.304 23:05:23 -- nvmf/common.sh@124 -- # return 0 00:22:55.304 23:05:23 -- nvmf/common.sh@477 -- # '[' -n 4151798 ']' 00:22:55.304 23:05:23 -- nvmf/common.sh@478 -- # killprocess 4151798 00:22:55.304 23:05:23 -- common/autotest_common.sh@926 -- # '[' -z 4151798 ']' 00:22:55.304 23:05:23 -- common/autotest_common.sh@930 -- # kill -0 4151798 00:22:55.304 23:05:23 -- common/autotest_common.sh@931 -- # uname 00:22:55.304 23:05:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:55.304 23:05:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4151798 00:22:55.304 23:05:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:55.304 23:05:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:55.304 23:05:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4151798' 00:22:55.304 killing process with pid 4151798 00:22:55.304 23:05:23 -- common/autotest_common.sh@945 -- # kill 4151798 00:22:55.304 23:05:23 -- common/autotest_common.sh@950 -- # wait 4151798 00:22:55.566 23:05:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:55.566 23:05:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:55.566 23:05:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:55.566 23:05:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.566 23:05:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:55.566 23:05:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.566 23:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.566 23:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.108 23:05:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:58.108 00:22:58.108 real 1m16.999s 00:22:58.108 user 5m1.951s 00:22:58.108 sys 0m18.588s 00:22:58.108 23:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.108 23:05:25 -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 ************************************ 00:22:58.108 END TEST nvmf_multiconnection 00:22:58.108 ************************************ 00:22:58.108 23:05:25 -- 
nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:58.108 23:05:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:58.108 23:05:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:58.108 23:05:25 -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 ************************************ 00:22:58.108 START TEST nvmf_initiator_timeout 00:22:58.108 ************************************ 00:22:58.108 23:05:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:58.108 * Looking for test storage... 00:22:58.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:58.108 23:05:25 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.108 23:05:25 -- nvmf/common.sh@7 -- # uname -s 00:22:58.108 23:05:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.108 23:05:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.108 23:05:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.108 23:05:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.108 23:05:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.108 23:05:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.108 23:05:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.108 23:05:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.108 23:05:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.108 23:05:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.108 23:05:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.108 23:05:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.108 23:05:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.108 23:05:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.108 23:05:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.108 23:05:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.108 23:05:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.108 23:05:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.108 23:05:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.108 23:05:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.108 23:05:25 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.108 23:05:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.108 23:05:25 -- paths/export.sh@5 -- # export PATH 00:22:58.108 23:05:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.108 23:05:25 -- nvmf/common.sh@46 -- # : 0 00:22:58.108 23:05:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:58.109 23:05:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:58.109 23:05:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:58.109 23:05:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.109 23:05:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.109 23:05:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:58.109 23:05:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:58.109 23:05:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:58.109 23:05:25 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:58.109 23:05:25 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:58.109 23:05:25 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:58.109 23:05:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:58.109 23:05:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.109 23:05:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:58.109 23:05:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:58.109 23:05:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:58.109 23:05:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.109 23:05:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.109 23:05:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.109 23:05:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:58.109 23:05:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:58.109 23:05:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:58.109 23:05:25 -- common/autotest_common.sh@10 -- # set +x 00:23:04.694 23:05:32 -- nvmf/common.sh@288 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:23:04.694 23:05:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:04.694 23:05:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:04.694 23:05:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:04.694 23:05:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:04.694 23:05:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:04.694 23:05:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:04.694 23:05:32 -- nvmf/common.sh@294 -- # net_devs=() 00:23:04.694 23:05:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:04.694 23:05:32 -- nvmf/common.sh@295 -- # e810=() 00:23:04.694 23:05:32 -- nvmf/common.sh@295 -- # local -ga e810 00:23:04.694 23:05:32 -- nvmf/common.sh@296 -- # x722=() 00:23:04.694 23:05:32 -- nvmf/common.sh@296 -- # local -ga x722 00:23:04.694 23:05:32 -- nvmf/common.sh@297 -- # mlx=() 00:23:04.694 23:05:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:04.694 23:05:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.694 23:05:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:04.694 23:05:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:04.694 23:05:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:04.694 23:05:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:04.694 23:05:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:04.694 23:05:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:04.694 23:05:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:04.694 23:05:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:04.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:04.694 23:05:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:04.694 23:05:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:04.694 23:05:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:04.695 23:05:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:04.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:04.695 23:05:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:04.695 23:05:32 -- 
nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:04.695 23:05:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:04.695 23:05:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.695 23:05:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:04.695 23:05:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.695 23:05:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:04.695 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:04.695 23:05:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.695 23:05:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:04.695 23:05:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.695 23:05:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:04.695 23:05:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.695 23:05:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:04.695 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:04.695 23:05:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.695 23:05:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:04.695 23:05:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:04.695 23:05:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:04.695 23:05:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:04.695 23:05:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.695 23:05:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.695 23:05:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.695 23:05:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:04.695 23:05:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.695 23:05:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.695 23:05:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:04.695 23:05:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.695 23:05:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.695 23:05:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:04.695 23:05:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:04.695 23:05:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.695 23:05:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.695 23:05:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.695 23:05:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.695 23:05:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:04.695 23:05:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.956 23:05:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.956 23:05:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.956 23:05:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:04.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:04.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:23:04.956 00:23:04.956 --- 10.0.0.2 ping statistics --- 00:23:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.956 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:23:04.956 23:05:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.411 ms 00:23:04.956 00:23:04.956 --- 10.0.0.1 ping statistics --- 00:23:04.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.956 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:23:04.956 23:05:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.956 23:05:32 -- nvmf/common.sh@410 -- # return 0 00:23:04.956 23:05:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:04.956 23:05:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.956 23:05:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:04.956 23:05:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:04.956 23:05:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.956 23:05:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:04.956 23:05:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:04.956 23:05:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:04.956 23:05:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:04.956 23:05:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:04.956 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:04.956 23:05:33 -- nvmf/common.sh@469 -- # nvmfpid=4169143 00:23:04.956 23:05:33 -- nvmf/common.sh@470 -- # waitforlisten 4169143 00:23:04.956 23:05:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:04.957 23:05:33 -- common/autotest_common.sh@819 -- # '[' -z 4169143 ']' 00:23:04.957 23:05:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.957 23:05:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:04.957 23:05:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.957 23:05:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:04.957 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:04.957 [2024-06-09 23:05:33.064729] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:04.957 [2024-06-09 23:05:33.064795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.957 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.957 [2024-06-09 23:05:33.134254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.218 [2024-06-09 23:05:33.207096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:05.218 [2024-06-09 23:05:33.207230] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:05.218 [2024-06-09 23:05:33.207240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.218 [2024-06-09 23:05:33.207249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.218 [2024-06-09 23:05:33.207386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.218 [2024-06-09 23:05:33.207528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.218 [2024-06-09 23:05:33.207766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:05.218 [2024-06-09 23:05:33.207767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.790 23:05:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:05.790 23:05:33 -- common/autotest_common.sh@852 -- # return 0 00:23:05.790 23:05:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:05.790 23:05:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:05.790 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.790 23:05:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.790 23:05:33 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:05.790 23:05:33 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:05.790 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.790 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.790 Malloc0 00:23:05.790 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.790 23:05:33 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:05.790 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.790 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.790 Delay0 00:23:05.790 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.790 23:05:33 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:05.790 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.790 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.790 [2024-06-09 23:05:33.914702] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.790 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.790 23:05:33 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:05.790 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.790 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.791 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.791 23:05:33 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:05.791 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.791 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.791 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.791 23:05:33 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.791 23:05:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.791 23:05:33 -- common/autotest_common.sh@10 -- # set +x 00:23:05.791 [2024-06-09 23:05:33.954990] tcp.c: 
951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.791 23:05:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.791 23:05:33 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:07.703 23:05:35 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:07.703 23:05:35 -- common/autotest_common.sh@1177 -- # local i=0 00:23:07.703 23:05:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:07.703 23:05:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:07.703 23:05:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:09.634 23:05:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:09.634 23:05:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:09.634 23:05:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:09.634 23:05:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:09.634 23:05:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:09.634 23:05:37 -- common/autotest_common.sh@1187 -- # return 0 00:23:09.634 23:05:37 -- target/initiator_timeout.sh@35 -- # fio_pid=4170128 00:23:09.634 23:05:37 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:09.634 23:05:37 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:09.634 [global] 00:23:09.634 thread=1 00:23:09.634 invalidate=1 00:23:09.634 rw=write 00:23:09.634 time_based=1 00:23:09.634 runtime=60 00:23:09.634 ioengine=libaio 00:23:09.634 direct=1 00:23:09.634 bs=4096 00:23:09.634 iodepth=1 00:23:09.634 norandommap=0 00:23:09.634 numjobs=1 00:23:09.634 00:23:09.634 verify_dump=1 00:23:09.634 verify_backlog=512 00:23:09.634 verify_state_save=0 00:23:09.634 do_verify=1 00:23:09.634 verify=crc32c-intel 00:23:09.634 [job0] 00:23:09.634 filename=/dev/nvme0n1 00:23:09.634 Could not set queue depth (nvme0n1) 00:23:09.894 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:09.894 fio-3.35 00:23:09.894 Starting 1 thread 00:23:12.440 23:05:40 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:12.440 23:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.440 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.440 true 00:23:12.440 23:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.440 23:05:40 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:12.440 23:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.440 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.440 true 00:23:12.440 23:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.440 23:05:40 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:12.440 23:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.440 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.440 true 00:23:12.440 23:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.440 23:05:40 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 
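# A minimal sketch of the sequence traced around this point, with the rpc.py path
# assumed (the trace's rpc_cmd wrapper is not expanded here): fio is writing
# /dev/nvme0n1 (bs=4096, iodepth=1, runtime=60, verify=crc32c-intel) while the
# namespace behind it is Delay0, a delay bdev created over Malloc0 with 30 us
# latencies. The test now raises the delay latencies (values are in microseconds,
# so 31000000 is roughly 31 s) to hold outstanding writes long enough to exercise
# the initiator timeout path, then shortly afterwards lowers them back to 30 us
# and waits for fio to finish with status 0, as the job summary further below
# confirms.
#
#   scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
#   scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
#   scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
#   scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
#   sleep 3
#   scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
#   scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
#   scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
#   scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30
#   wait "$fio_pid"    # fio_pid=4170128 in this run; expected to exit 0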
00:23:12.440 23:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:12.440 23:05:40 -- common/autotest_common.sh@10 -- # set +x 00:23:12.440 true 00:23:12.440 23:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:12.440 23:05:40 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:15.742 23:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.742 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 true 00:23:15.742 23:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:15.742 23:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.742 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 true 00:23:15.742 23:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:15.742 23:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.742 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 true 00:23:15.742 23:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:15.742 23:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.742 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 true 00:23:15.742 23:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:15.742 23:05:43 -- target/initiator_timeout.sh@54 -- # wait 4170128 00:24:12.009 00:24:12.009 job0: (groupid=0, jobs=1): err= 0: pid=4170347: Sun Jun 9 23:06:37 2024 00:24:12.009 read: IOPS=108, BW=435KiB/s (445kB/s)(25.5MiB/60001msec) 00:24:12.009 slat (usec): min=7, max=11773, avg=28.34, stdev=182.07 00:24:12.009 clat (usec): min=776, max=42210k, avg=8146.19, stdev=522778.20 00:24:12.009 lat (usec): min=791, max=42210k, avg=8174.53, stdev=522778.18 00:24:12.009 clat percentiles (usec): 00:24:12.009 | 1.00th=[ 1156], 5.00th=[ 1254], 10.00th=[ 1287], 00:24:12.009 | 20.00th=[ 1319], 30.00th=[ 1336], 40.00th=[ 1352], 00:24:12.009 | 50.00th=[ 1352], 60.00th=[ 1369], 70.00th=[ 1385], 00:24:12.009 | 80.00th=[ 1418], 90.00th=[ 1598], 95.00th=[ 1647], 00:24:12.009 | 99.00th=[ 1713], 99.50th=[ 42206], 99.90th=[ 42206], 00:24:12.009 | 99.95th=[ 42206], 99.99th=[17112761] 00:24:12.009 write: IOPS=110, BW=444KiB/s (454kB/s)(26.0MiB/60001msec); 0 zone resets 00:24:12.009 slat (nsec): min=9613, max=68757, avg=32548.28, stdev=2263.80 00:24:12.009 clat (usec): min=572, max=1207, avg=958.78, stdev=58.25 00:24:12.009 lat (usec): min=584, max=1255, avg=991.33, stdev=58.28 00:24:12.009 clat percentiles (usec): 00:24:12.009 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 889], 20.00th=[ 914], 00:24:12.009 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[ 979], 60.00th=[ 988], 00:24:12.009 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1037], 00:24:12.009 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1156], 99.95th=[ 1188], 00:24:12.009 | 99.99th=[ 1205] 00:24:12.009 bw ( KiB/s): min= 88, max= 4008, per=100.00%, avg=2048.00, stdev=1283.00, samples=26 00:24:12.009 iops : min= 22, max= 1002, avg=512.00, stdev=320.75, samples=26 00:24:12.009 lat (usec) : 750=0.14%, 1000=40.75% 
00:24:12.009 lat (msec) : 2=58.76%, 50=0.35%, >=2000=0.01% 00:24:12.009 cpu : usr=0.34%, sys=0.68%, ctx=13179, majf=0, minf=1 00:24:12.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:12.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.009 issued rwts: total=6519,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:12.009 00:24:12.009 Run status group 0 (all jobs): 00:24:12.009 READ: bw=435KiB/s (445kB/s), 435KiB/s-435KiB/s (445kB/s-445kB/s), io=25.5MiB (26.7MB), run=60001-60001msec 00:24:12.009 WRITE: bw=444KiB/s (454kB/s), 444KiB/s-444KiB/s (454kB/s-454kB/s), io=26.0MiB (27.3MB), run=60001-60001msec 00:24:12.009 00:24:12.010 Disk stats (read/write): 00:24:12.010 nvme0n1: ios=6544/6656, merge=0/0, ticks=10933/6468, in_queue=17401, util=99.75% 00:24:12.010 23:06:37 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:12.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:12.010 23:06:38 -- common/autotest_common.sh@1198 -- # local i=0 00:24:12.010 23:06:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:12.010 23:06:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:12.010 23:06:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:12.010 23:06:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:12.010 23:06:38 -- common/autotest_common.sh@1210 -- # return 0 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:12.010 nvmf hotplug test: fio successful as expected 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.010 23:06:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:12.010 23:06:38 -- common/autotest_common.sh@10 -- # set +x 00:24:12.010 23:06:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:12.010 23:06:38 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:12.010 23:06:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:12.010 23:06:38 -- nvmf/common.sh@116 -- # sync 00:24:12.010 23:06:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:12.010 23:06:38 -- nvmf/common.sh@119 -- # set +e 00:24:12.010 23:06:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:12.010 23:06:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:12.010 rmmod nvme_tcp 00:24:12.010 rmmod nvme_fabrics 00:24:12.010 rmmod nvme_keyring 00:24:12.010 23:06:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:12.010 23:06:38 -- nvmf/common.sh@123 -- # set -e 00:24:12.010 23:06:38 -- nvmf/common.sh@124 -- # return 0 00:24:12.010 23:06:38 -- nvmf/common.sh@477 -- # '[' -n 4169143 ']' 00:24:12.010 23:06:38 -- nvmf/common.sh@478 -- # killprocess 4169143 00:24:12.010 23:06:38 -- common/autotest_common.sh@926 -- # '[' -z 4169143 ']' 00:24:12.010 23:06:38 -- common/autotest_common.sh@930 -- # kill -0 4169143 
00:24:12.010 23:06:38 -- common/autotest_common.sh@931 -- # uname 00:24:12.010 23:06:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:12.010 23:06:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4169143 00:24:12.010 23:06:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:12.010 23:06:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:12.010 23:06:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4169143' 00:24:12.010 killing process with pid 4169143 00:24:12.010 23:06:38 -- common/autotest_common.sh@945 -- # kill 4169143 00:24:12.010 23:06:38 -- common/autotest_common.sh@950 -- # wait 4169143 00:24:12.010 23:06:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:12.010 23:06:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:12.010 23:06:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:12.010 23:06:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.010 23:06:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:12.010 23:06:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.010 23:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.010 23:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.583 23:06:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:12.583 00:24:12.583 real 1m14.759s 00:24:12.583 user 4m37.428s 00:24:12.583 sys 0m7.086s 00:24:12.583 23:06:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.583 23:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:12.583 ************************************ 00:24:12.583 END TEST nvmf_initiator_timeout 00:24:12.583 ************************************ 00:24:12.583 23:06:40 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:12.583 23:06:40 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:12.583 23:06:40 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:12.583 23:06:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:12.583 23:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:19.175 23:06:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:19.175 23:06:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:19.175 23:06:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:19.175 23:06:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:19.175 23:06:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:19.175 23:06:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:19.175 23:06:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:19.175 23:06:46 -- nvmf/common.sh@294 -- # net_devs=() 00:24:19.175 23:06:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:19.175 23:06:46 -- nvmf/common.sh@295 -- # e810=() 00:24:19.175 23:06:46 -- nvmf/common.sh@295 -- # local -ga e810 00:24:19.175 23:06:46 -- nvmf/common.sh@296 -- # x722=() 00:24:19.175 23:06:46 -- nvmf/common.sh@296 -- # local -ga x722 00:24:19.175 23:06:46 -- nvmf/common.sh@297 -- # mlx=() 00:24:19.175 23:06:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:19.175 23:06:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:24:19.175 23:06:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.175 23:06:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:19.175 23:06:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:19.175 23:06:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:19.175 23:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.175 23:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:19.175 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:19.175 23:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.175 23:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:19.175 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:19.175 23:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:19.175 23:06:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:19.175 23:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:19.175 23:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.175 23:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.175 23:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.175 23:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:19.175 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:19.175 23:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.175 23:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:19.175 23:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.175 23:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.175 23:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.175 23:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:19.175 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:19.175 23:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.175 23:06:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:19.175 23:06:46 
-- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.175 23:06:46 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:19.175 23:06:46 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:19.175 23:06:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:19.175 23:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.175 23:06:46 -- common/autotest_common.sh@10 -- # set +x 00:24:19.175 ************************************ 00:24:19.175 START TEST nvmf_perf_adq 00:24:19.175 ************************************ 00:24:19.175 23:06:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:19.175 * Looking for test storage... 00:24:19.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:19.175 23:06:47 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.175 23:06:47 -- nvmf/common.sh@7 -- # uname -s 00:24:19.175 23:06:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.175 23:06:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.175 23:06:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.176 23:06:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.176 23:06:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.176 23:06:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.176 23:06:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.176 23:06:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.176 23:06:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.176 23:06:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.176 23:06:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.176 23:06:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.176 23:06:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.176 23:06:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.176 23:06:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.176 23:06:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.176 23:06:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.176 23:06:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.176 23:06:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.176 23:06:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.176 23:06:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.176 23:06:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.176 23:06:47 -- paths/export.sh@5 -- # export PATH 00:24:19.176 23:06:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.176 23:06:47 -- nvmf/common.sh@46 -- # : 0 00:24:19.176 23:06:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:19.176 23:06:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:19.176 23:06:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:19.176 23:06:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.176 23:06:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.176 23:06:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:19.176 23:06:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:19.176 23:06:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:19.176 23:06:47 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:19.176 23:06:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:19.176 23:06:47 -- common/autotest_common.sh@10 -- # set +x 00:24:25.774 23:06:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:25.774 23:06:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:25.774 23:06:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:25.774 23:06:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:25.774 23:06:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:25.774 23:06:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:25.774 23:06:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:25.774 23:06:53 -- nvmf/common.sh@294 -- # net_devs=() 00:24:25.774 23:06:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:25.774 23:06:53 -- nvmf/common.sh@295 -- # e810=() 00:24:25.774 23:06:53 -- nvmf/common.sh@295 -- # local -ga e810 00:24:25.774 23:06:53 -- nvmf/common.sh@296 -- # x722=() 00:24:25.774 23:06:53 -- nvmf/common.sh@296 -- # local -ga x722 00:24:25.774 23:06:53 -- nvmf/common.sh@297 -- # mlx=() 00:24:25.774 23:06:53 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:25.774 23:06:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.774 23:06:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:25.774 23:06:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:25.774 23:06:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:25.774 23:06:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.774 23:06:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:25.774 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:25.774 23:06:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:25.774 23:06:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:25.774 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:25.774 23:06:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:25.774 23:06:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:25.774 23:06:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.774 23:06:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.774 23:06:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.774 23:06:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.774 23:06:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:25.774 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:25.774 23:06:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.774 23:06:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:25.774 23:06:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:25.774 23:06:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:25.774 23:06:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.774 23:06:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:25.774 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:25.774 23:06:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.774 23:06:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:25.774 23:06:53 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.774 23:06:53 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:25.774 23:06:53 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:25.774 23:06:53 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:25.774 23:06:53 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:27.237 23:06:55 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:29.155 23:06:57 -- target/perf_adq.sh@54 -- # sleep 5 00:24:34.448 23:07:02 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:34.448 23:07:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:34.448 23:07:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.448 23:07:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:34.448 23:07:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:34.448 23:07:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:34.448 23:07:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.448 23:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.449 23:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.449 23:07:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:34.449 23:07:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:34.449 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:24:34.449 23:07:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.449 23:07:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:34.449 23:07:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:34.449 23:07:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:34.449 23:07:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:34.449 23:07:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:34.449 23:07:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:34.449 23:07:02 -- nvmf/common.sh@294 -- # net_devs=() 00:24:34.449 23:07:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:34.449 23:07:02 -- nvmf/common.sh@295 -- # e810=() 00:24:34.449 23:07:02 -- nvmf/common.sh@295 -- # local -ga e810 00:24:34.449 23:07:02 -- nvmf/common.sh@296 -- # x722=() 00:24:34.449 23:07:02 -- nvmf/common.sh@296 -- # local -ga x722 00:24:34.449 23:07:02 -- nvmf/common.sh@297 -- # mlx=() 00:24:34.449 23:07:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:34.449 23:07:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.449 23:07:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:34.449 23:07:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:34.449 23:07:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.449 23:07:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:34.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:34.449 23:07:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.449 23:07:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:34.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:34.449 23:07:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.449 23:07:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.449 23:07:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.449 23:07:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:34.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:34.449 23:07:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.449 23:07:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.449 23:07:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.449 23:07:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.449 23:07:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:34.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:34.449 23:07:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.449 23:07:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:34.449 23:07:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:34.449 23:07:02 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:34.449 23:07:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:34.449 23:07:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.449 23:07:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.449 23:07:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.449 23:07:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:34.449 23:07:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.449 23:07:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.449 23:07:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:34.449 23:07:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.449 23:07:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.449 23:07:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:34.449 23:07:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:34.449 23:07:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.449 23:07:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.449 23:07:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.449 23:07:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.449 23:07:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:34.449 23:07:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.449 23:07:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.449 23:07:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.709 23:07:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:34.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:24:34.709 00:24:34.709 --- 10.0.0.2 ping statistics --- 00:24:34.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.709 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:24:34.709 23:07:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:24:34.709 00:24:34.709 --- 10.0.0.1 ping statistics --- 00:24:34.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.709 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:24:34.709 23:07:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.709 23:07:02 -- nvmf/common.sh@410 -- # return 0 00:24:34.709 23:07:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:34.709 23:07:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.709 23:07:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:34.709 23:07:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:34.709 23:07:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.709 23:07:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:34.709 23:07:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:34.709 23:07:02 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:34.709 23:07:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:34.709 23:07:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:34.709 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:24:34.709 23:07:02 -- nvmf/common.sh@469 -- # nvmfpid=4192100 00:24:34.709 23:07:02 -- nvmf/common.sh@470 -- # waitforlisten 4192100 00:24:34.709 23:07:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:34.709 23:07:02 -- common/autotest_common.sh@819 -- # '[' -z 4192100 ']' 00:24:34.709 23:07:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.709 23:07:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:34.709 23:07:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.709 23:07:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:34.709 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:24:34.709 [2024-06-09 23:07:02.767433] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:34.709 [2024-06-09 23:07:02.767520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.709 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.709 [2024-06-09 23:07:02.837798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.969 [2024-06-09 23:07:02.910595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:34.969 [2024-06-09 23:07:02.910731] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.969 [2024-06-09 23:07:02.910741] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.969 [2024-06-09 23:07:02.910751] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
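For reference, the target/initiator loopback that nvmftestinit assembles above (and that the two pings just verified) reduces to a short sequence of ip/iptables commands. This is a minimal sketch using the names and addresses printed in this log; cvl_0_0, cvl_0_1 and the 10.0.0.x addresses are specific to this rig's renamed E810 ports and would differ elsewhere:

  # move the target-side port into its own namespace so both ends can live on one host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator side stays in the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Keeping the target port inside cvl_0_0_ns_spdk is why nvmf_tgt is launched through 'ip netns exec cvl_0_0_ns_spdk' below: traffic between the two addresses then has to traverse the physical ports rather than the kernel loopback.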
00:24:34.969 [2024-06-09 23:07:02.910886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.969 [2024-06-09 23:07:02.911003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.969 [2024-06-09 23:07:02.911162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.969 [2024-06-09 23:07:02.911163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.539 23:07:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:35.539 23:07:03 -- common/autotest_common.sh@852 -- # return 0 00:24:35.539 23:07:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:35.539 23:07:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 23:07:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.539 23:07:03 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:35.539 23:07:03 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 [2024-06-09 23:07:03.663351] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 Malloc1 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.539 23:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.539 23:07:03 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:35.539 23:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.539 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:24:35.800 [2024-06-09 23:07:03.718709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.800 23:07:03 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.800 23:07:03 -- target/perf_adq.sh@73 -- # perfpid=4192447 00:24:35.800 23:07:03 -- target/perf_adq.sh@74 -- # sleep 2 00:24:35.800 23:07:03 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:35.800 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.712 23:07:05 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:37.712 23:07:05 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:37.712 23:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.712 23:07:05 -- target/perf_adq.sh@76 -- # wc -l 00:24:37.712 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:24:37.712 23:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.712 23:07:05 -- target/perf_adq.sh@76 -- # count=4 00:24:37.712 23:07:05 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:37.712 23:07:05 -- target/perf_adq.sh@81 -- # wait 4192447 00:24:45.845 Initializing NVMe Controllers 00:24:45.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:45.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:45.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:45.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:45.845 Initialization complete. Launching workers. 00:24:45.845 ======================================================== 00:24:45.845 Latency(us) 00:24:45.845 Device Information : IOPS MiB/s Average min max 00:24:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11067.36 43.23 5802.18 1345.85 46551.11 00:24:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15666.44 61.20 4084.53 1287.76 9948.69 00:24:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 15360.14 60.00 4166.08 1240.68 14008.82 00:24:45.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11684.95 45.64 5477.77 1169.80 15875.13 00:24:45.845 ======================================================== 00:24:45.845 Total : 53778.88 210.07 4764.02 1169.80 46551.11 00:24:45.845 00:24:45.845 23:07:13 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:45.845 23:07:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:45.845 23:07:13 -- nvmf/common.sh@116 -- # sync 00:24:45.845 23:07:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:45.845 23:07:13 -- nvmf/common.sh@119 -- # set +e 00:24:45.845 23:07:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:45.845 23:07:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:45.845 rmmod nvme_tcp 00:24:45.845 rmmod nvme_fabrics 00:24:45.845 rmmod nvme_keyring 00:24:45.845 23:07:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:45.845 23:07:13 -- nvmf/common.sh@123 -- # set -e 00:24:45.845 23:07:13 -- nvmf/common.sh@124 -- # return 0 00:24:45.845 23:07:13 -- nvmf/common.sh@477 -- # '[' -n 4192100 ']' 00:24:45.845 23:07:13 -- nvmf/common.sh@478 -- # killprocess 4192100 00:24:45.845 23:07:13 -- common/autotest_common.sh@926 -- # '[' -z 4192100 ']' 00:24:45.845 23:07:13 -- common/autotest_common.sh@930 
-- # kill -0 4192100 00:24:45.845 23:07:14 -- common/autotest_common.sh@931 -- # uname 00:24:45.845 23:07:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:45.845 23:07:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 4192100 00:24:46.106 23:07:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:46.106 23:07:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:46.106 23:07:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 4192100' 00:24:46.106 killing process with pid 4192100 00:24:46.106 23:07:14 -- common/autotest_common.sh@945 -- # kill 4192100 00:24:46.106 23:07:14 -- common/autotest_common.sh@950 -- # wait 4192100 00:24:46.106 23:07:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:46.106 23:07:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:46.106 23:07:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:46.106 23:07:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:46.106 23:07:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:46.106 23:07:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.106 23:07:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.106 23:07:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.648 23:07:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:48.648 23:07:16 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:48.648 23:07:16 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:49.659 23:07:17 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:52.202 23:07:19 -- target/perf_adq.sh@54 -- # sleep 5 00:24:57.493 23:07:24 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:57.493 23:07:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:57.493 23:07:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.493 23:07:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:57.493 23:07:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:57.493 23:07:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:57.493 23:07:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.493 23:07:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.493 23:07:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.493 23:07:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:57.493 23:07:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:57.493 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:57.493 23:07:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:57.493 23:07:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:57.493 23:07:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:57.493 23:07:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:57.493 23:07:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:57.493 23:07:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:57.493 23:07:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:57.493 23:07:24 -- nvmf/common.sh@294 -- # net_devs=() 00:24:57.493 23:07:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:57.493 23:07:24 -- nvmf/common.sh@295 -- # e810=() 00:24:57.493 23:07:24 -- nvmf/common.sh@295 -- # local -ga e810 00:24:57.493 23:07:24 -- nvmf/common.sh@296 -- # x722=() 00:24:57.493 23:07:24 -- nvmf/common.sh@296 -- # local -ga x722 00:24:57.493 23:07:24 -- nvmf/common.sh@297 -- # mlx=() 00:24:57.493 
23:07:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:57.493 23:07:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.493 23:07:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:57.493 23:07:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:57.493 23:07:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:57.493 23:07:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:57.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:57.493 23:07:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:57.493 23:07:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:57.493 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:57.493 23:07:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:57.493 23:07:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.493 23:07:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.493 23:07:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:57.493 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:57.493 23:07:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.493 23:07:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:57.493 23:07:24 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.493 23:07:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.493 23:07:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:57.493 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:57.493 23:07:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.493 23:07:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:57.493 23:07:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:57.493 23:07:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:57.493 23:07:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.493 23:07:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.493 23:07:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.493 23:07:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:57.493 23:07:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.493 23:07:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.493 23:07:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:57.493 23:07:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.493 23:07:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.493 23:07:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:57.493 23:07:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:57.493 23:07:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.493 23:07:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.493 23:07:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.493 23:07:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.493 23:07:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:57.493 23:07:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.493 23:07:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.493 23:07:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.493 23:07:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:57.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:24:57.493 00:24:57.493 --- 10.0.0.2 ping statistics --- 00:24:57.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.493 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:24:57.493 23:07:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:57.493 00:24:57.493 --- 10.0.0.1 ping statistics --- 00:24:57.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.493 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:57.493 23:07:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.493 23:07:25 -- nvmf/common.sh@410 -- # return 0 00:24:57.493 23:07:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:57.493 23:07:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.493 23:07:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:57.493 23:07:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:57.493 23:07:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.493 23:07:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:57.493 23:07:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:57.493 23:07:25 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:57.493 23:07:25 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:57.494 23:07:25 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:57.494 23:07:25 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:57.494 net.core.busy_poll = 1 00:24:57.494 23:07:25 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:57.494 net.core.busy_read = 1 00:24:57.494 23:07:25 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:57.494 23:07:25 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:57.494 23:07:25 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:57.494 23:07:25 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:57.494 23:07:25 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:57.494 23:07:25 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:57.494 23:07:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:57.494 23:07:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:57.494 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:57.494 23:07:25 -- nvmf/common.sh@469 -- # nvmfpid=3641 00:24:57.494 23:07:25 -- nvmf/common.sh@470 -- # waitforlisten 3641 00:24:57.494 23:07:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:57.494 23:07:25 -- common/autotest_common.sh@819 -- # '[' -z 3641 ']' 00:24:57.494 23:07:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.494 23:07:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:57.494 23:07:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
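The adq_configure_driver block above is the host-side half of the ADQ setup and is easier to follow collected in one place. The sketch below simply gathers the commands echoed in this log, all run against cvl_0_0 inside the cvl_0_0_ns_spdk namespace; the queue counts and TC map match this 4-queue configuration, and $SPDK_DIR stands in for the SPDK checkout path shown above:

  # enable hardware TC offload and disable the packet-inspect optimization on the target port
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # busy polling so socket reads poll the NIC queues instead of waiting on interrupts
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 = default (2 queues at offset 0), TC1 = ADQ queues (2 queues at offset 2)
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw)
  ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # helper from the SPDK tree: configure XPS so transmits use the queue paired with the receiving queue
  ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The application side of the pairing follows right after this in the log: the target is started with sock_impl_set_options --enable-placement-id 1 and a TCP transport created with --sock-priority 1, the intent being that SPDK places each incoming connection on the poll group associated with its hardware queue.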
00:24:57.494 23:07:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:57.494 23:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:57.494 [2024-06-09 23:07:25.500143] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:57.494 [2024-06-09 23:07:25.500223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.494 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.494 [2024-06-09 23:07:25.576878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.494 [2024-06-09 23:07:25.651302] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:57.494 [2024-06-09 23:07:25.651445] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.494 [2024-06-09 23:07:25.651456] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.494 [2024-06-09 23:07:25.651465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.494 [2024-06-09 23:07:25.651617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.494 [2024-06-09 23:07:25.651735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.494 [2024-06-09 23:07:25.651894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.494 [2024-06-09 23:07:25.651894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.438 23:07:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:58.438 23:07:26 -- common/autotest_common.sh@852 -- # return 0 00:24:58.438 23:07:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:58.438 23:07:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 23:07:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.438 23:07:26 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:58.438 23:07:26 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 [2024-06-09 23:07:26.381631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.438 Malloc1 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.438 23:07:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:58.438 23:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:58.438 [2024-06-09 23:07:26.436987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.438 23:07:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:58.438 23:07:26 -- target/perf_adq.sh@94 -- # perfpid=4058 00:24:58.438 23:07:26 -- target/perf_adq.sh@95 -- # sleep 2 00:24:58.438 23:07:26 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:58.438 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.354 23:07:28 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:00.354 23:07:28 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:00.354 23:07:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:00.354 23:07:28 -- target/perf_adq.sh@97 -- # wc -l 00:25:00.354 23:07:28 -- common/autotest_common.sh@10 -- # set +x 00:25:00.354 23:07:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:00.354 23:07:28 -- target/perf_adq.sh@97 -- # count=2 00:25:00.354 23:07:28 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:25:00.354 23:07:28 -- target/perf_adq.sh@103 -- # wait 4058 00:25:08.494 Initializing NVMe Controllers 00:25:08.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:08.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:08.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:08.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:08.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:08.494 Initialization complete. Launching workers. 
00:25:08.494 ======================================================== 00:25:08.494 Latency(us) 00:25:08.494 Device Information : IOPS MiB/s Average min max 00:25:08.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7869.40 30.74 8133.77 1371.61 52565.38 00:25:08.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 12371.50 48.33 5174.39 1816.62 10128.93 00:25:08.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7848.20 30.66 8175.42 1575.59 54562.68 00:25:08.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7924.30 30.95 8110.86 1336.83 52856.07 00:25:08.494 ======================================================== 00:25:08.494 Total : 36013.40 140.68 7121.19 1336.83 54562.68 00:25:08.494 00:25:08.494 23:07:36 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:08.494 23:07:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:08.494 23:07:36 -- nvmf/common.sh@116 -- # sync 00:25:08.494 23:07:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:08.494 23:07:36 -- nvmf/common.sh@119 -- # set +e 00:25:08.494 23:07:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:08.494 23:07:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:08.494 rmmod nvme_tcp 00:25:08.494 rmmod nvme_fabrics 00:25:08.494 rmmod nvme_keyring 00:25:08.494 23:07:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:08.494 23:07:36 -- nvmf/common.sh@123 -- # set -e 00:25:08.494 23:07:36 -- nvmf/common.sh@124 -- # return 0 00:25:08.494 23:07:36 -- nvmf/common.sh@477 -- # '[' -n 3641 ']' 00:25:08.494 23:07:36 -- nvmf/common.sh@478 -- # killprocess 3641 00:25:08.494 23:07:36 -- common/autotest_common.sh@926 -- # '[' -z 3641 ']' 00:25:08.494 23:07:36 -- common/autotest_common.sh@930 -- # kill -0 3641 00:25:08.494 23:07:36 -- common/autotest_common.sh@931 -- # uname 00:25:08.754 23:07:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:08.754 23:07:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3641 00:25:08.754 23:07:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:08.754 23:07:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:08.754 23:07:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3641' 00:25:08.755 killing process with pid 3641 00:25:08.755 23:07:36 -- common/autotest_common.sh@945 -- # kill 3641 00:25:08.755 23:07:36 -- common/autotest_common.sh@950 -- # wait 3641 00:25:08.755 23:07:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:08.755 23:07:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:08.755 23:07:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:08.755 23:07:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:08.755 23:07:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:08.755 23:07:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.755 23:07:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.755 23:07:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.057 23:07:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:12.057 23:07:39 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:12.057 00:25:12.057 real 0m53.015s 00:25:12.057 user 2m48.626s 00:25:12.057 sys 0m10.750s 00:25:12.057 23:07:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.057 23:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:12.057 
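Both perf runs above are gated on the same kind of check rather than on the latency numbers: while spdk_nvme_perf is connected, nvmf_get_stats is queried and jq counts how the initiator's queue pairs are distributed across the target's four poll groups. Below is a minimal sketch of the first run's query issued directly against the target's RPC socket; scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock are assumptions here, whereas the log goes through the rpc_cmd and ip netns wrappers:

  # count poll groups that currently own exactly one I/O queue pair;
  # the first run passes only if this prints 4, i.e. the 4-core (-c 0xF0) initiator
  # left one queue pair on each of the four poll groups
  scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
      | wc -l

The second run flips the selector to current_io_qpairs == 0 and passes when at least two of the four poll groups are left idle, the point being that with --enable-placement-id 1 and the tc flower filter above, connections should collapse onto the poll groups servicing the ADQ queues instead of being spread one per group.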
************************************ 00:25:12.057 END TEST nvmf_perf_adq 00:25:12.057 ************************************ 00:25:12.058 23:07:39 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:12.058 23:07:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:12.058 23:07:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.058 23:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:12.058 ************************************ 00:25:12.058 START TEST nvmf_shutdown 00:25:12.058 ************************************ 00:25:12.058 23:07:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:12.058 * Looking for test storage... 00:25:12.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:12.058 23:07:40 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.058 23:07:40 -- nvmf/common.sh@7 -- # uname -s 00:25:12.058 23:07:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.058 23:07:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.058 23:07:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.058 23:07:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.058 23:07:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.058 23:07:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.058 23:07:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.058 23:07:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.058 23:07:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.058 23:07:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.058 23:07:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:12.058 23:07:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:12.058 23:07:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.058 23:07:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.058 23:07:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.058 23:07:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.058 23:07:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.058 23:07:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.058 23:07:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.058 23:07:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.058 23:07:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.058 23:07:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.058 23:07:40 -- paths/export.sh@5 -- # export PATH 00:25:12.058 23:07:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.058 23:07:40 -- nvmf/common.sh@46 -- # : 0 00:25:12.058 23:07:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:12.058 23:07:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:12.058 23:07:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:12.058 23:07:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.058 23:07:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.058 23:07:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:12.058 23:07:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:12.058 23:07:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:12.058 23:07:40 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.058 23:07:40 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.058 23:07:40 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:12.058 23:07:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:12.058 23:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.058 23:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:12.058 ************************************ 00:25:12.058 START TEST nvmf_shutdown_tc1 00:25:12.058 ************************************ 00:25:12.058 23:07:40 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:12.058 23:07:40 -- target/shutdown.sh@74 -- # starttarget 00:25:12.058 23:07:40 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:12.058 23:07:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:12.058 23:07:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.058 23:07:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:12.058 23:07:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:12.058 23:07:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:12.058 
23:07:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.058 23:07:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.058 23:07:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.058 23:07:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:12.058 23:07:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:12.058 23:07:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:12.058 23:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:20.205 23:07:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:20.205 23:07:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:20.205 23:07:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:20.205 23:07:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:20.205 23:07:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:20.205 23:07:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:20.205 23:07:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:20.205 23:07:46 -- nvmf/common.sh@294 -- # net_devs=() 00:25:20.205 23:07:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:20.205 23:07:46 -- nvmf/common.sh@295 -- # e810=() 00:25:20.205 23:07:46 -- nvmf/common.sh@295 -- # local -ga e810 00:25:20.205 23:07:46 -- nvmf/common.sh@296 -- # x722=() 00:25:20.205 23:07:46 -- nvmf/common.sh@296 -- # local -ga x722 00:25:20.205 23:07:46 -- nvmf/common.sh@297 -- # mlx=() 00:25:20.205 23:07:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:20.205 23:07:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.205 23:07:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:20.205 23:07:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:20.205 23:07:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:20.205 23:07:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.205 23:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:20.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:20.205 23:07:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:20.205 23:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:20.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:20.205 23:07:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:20.205 23:07:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:20.205 23:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.205 23:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.205 23:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.205 23:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.206 23:07:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:20.206 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:20.206 23:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.206 23:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.206 23:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.206 23:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.206 23:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.206 23:07:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:20.206 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:20.206 23:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.206 23:07:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:20.206 23:07:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:20.206 23:07:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:20.206 23:07:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:20.206 23:07:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:20.206 23:07:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.206 23:07:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.206 23:07:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.206 23:07:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:20.206 23:07:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.206 23:07:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.206 23:07:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:20.206 23:07:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.206 23:07:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.206 23:07:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:20.206 23:07:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:20.206 23:07:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.206 23:07:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.206 23:07:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.206 23:07:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.206 23:07:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:20.206 23:07:47 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.206 23:07:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.206 23:07:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.206 23:07:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.534 ms 00:25:20.206 00:25:20.206 --- 10.0.0.2 ping statistics --- 00:25:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.206 rtt min/avg/max/mdev = 0.534/0.534/0.534/0.000 ms 00:25:20.206 23:07:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:25:20.206 00:25:20.206 --- 10.0.0.1 ping statistics --- 00:25:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.206 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:25:20.206 23:07:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.206 23:07:47 -- nvmf/common.sh@410 -- # return 0 00:25:20.206 23:07:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.206 23:07:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.206 23:07:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.206 23:07:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.206 23:07:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.206 23:07:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.206 23:07:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.206 23:07:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:20.206 23:07:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.206 23:07:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.206 23:07:47 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 23:07:47 -- nvmf/common.sh@469 -- # nvmfpid=10506 00:25:20.206 23:07:47 -- nvmf/common.sh@470 -- # waitforlisten 10506 00:25:20.206 23:07:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:20.206 23:07:47 -- common/autotest_common.sh@819 -- # '[' -z 10506 ']' 00:25:20.206 23:07:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.206 23:07:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.206 23:07:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.206 23:07:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.206 23:07:47 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 [2024-06-09 23:07:47.407309] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
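Note on the network plumbing traced just above: nvmf_tcp_init separates the two E810 ports into different network stacks so the target and the initiator sides can talk NVMe/TCP to each other on one host. The target-side port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the default namespace with 10.0.0.1/24, an iptables rule accepts TCP port 4420 traffic arriving on the initiator-side interface, and one ping in each direction confirms connectivity before the target application is started. Condensed into a standalone sketch, using the interface names and addresses this particular run happens to detect:

    # run as root; interface names as detected in this run (Intel E810, ice driver)
    TARGET_IF=cvl_0_0          # becomes the NVMe/TCP target side, inside the namespace
    INITIATOR_IF=cvl_0_1       # stays in the default namespace as the initiator side
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # host to namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace back to host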
00:25:20.206 [2024-06-09 23:07:47.407372] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.206 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.206 [2024-06-09 23:07:47.480131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.206 [2024-06-09 23:07:47.552074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.206 [2024-06-09 23:07:47.552223] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.206 [2024-06-09 23:07:47.552234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.206 [2024-06-09 23:07:47.552248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.206 [2024-06-09 23:07:47.552362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.206 [2024-06-09 23:07:47.552518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.206 [2024-06-09 23:07:47.552684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.206 [2024-06-09 23:07:47.552685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:20.206 23:07:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.206 23:07:48 -- common/autotest_common.sh@852 -- # return 0 00:25:20.206 23:07:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:20.206 23:07:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.206 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 23:07:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.206 23:07:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.206 23:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.206 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 [2024-06-09 23:07:48.232616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.206 23:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.206 23:07:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:20.206 23:07:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:20.206 23:07:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.206 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 23:07:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- 
target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:20.206 23:07:48 -- target/shutdown.sh@28 -- # cat 00:25:20.206 23:07:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:20.206 23:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.206 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.206 Malloc1 00:25:20.206 [2024-06-09 23:07:48.336137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.206 Malloc2 00:25:20.468 Malloc3 00:25:20.468 Malloc4 00:25:20.468 Malloc5 00:25:20.468 Malloc6 00:25:20.468 Malloc7 00:25:20.468 Malloc8 00:25:20.468 Malloc9 00:25:20.731 Malloc10 00:25:20.731 23:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.731 23:07:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:20.731 23:07:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.731 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 23:07:48 -- target/shutdown.sh@78 -- # perfpid=10726 00:25:20.731 23:07:48 -- target/shutdown.sh@79 -- # waitforlisten 10726 /var/tmp/bdevperf.sock 00:25:20.731 23:07:48 -- common/autotest_common.sh@819 -- # '[' -z 10726 ']' 00:25:20.731 23:07:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.731 23:07:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.731 23:07:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
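The setup that just completed is shutdown.sh's create_subsystems step: a TCP transport is created directly over RPC (nvmf_create_transport -t tcp -o -u 8192, traced earlier), and then ten subsystems appear, each backed by a Malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from earlier in the trace) and exposed on the 10.0.0.2:4420 listener. The per-subsystem RPCs themselves are batched into rpcs.txt rather than echoed in the log, so the following is only an illustrative single-subsystem equivalent using scripts/rpc.py; the command names are standard SPDK RPCs, but the exact flags (for example the SPDK1 serial number) are assumptions, not a transcript of rpcs.txt:

    # illustrative equivalent of one loop iteration, issued against the target's
    # RPC socket; the test repeats this pattern for cnode1..cnode10
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420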
00:25:20.731 23:07:48 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:20.731 23:07:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.731 23:07:48 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:20.731 23:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:20.731 23:07:48 -- nvmf/common.sh@520 -- # config=() 00:25:20.731 23:07:48 -- nvmf/common.sh@520 -- # local subsystem config 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- 
nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.731 "hdgst": ${hdgst:-false}, 00:25:20.731 "ddgst": ${ddgst:-false} 00:25:20.731 }, 00:25:20.731 "method": "bdev_nvme_attach_controller" 00:25:20.731 } 00:25:20.731 EOF 00:25:20.731 )") 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.731 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.731 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.731 { 00:25:20.731 "params": { 00:25:20.731 "name": "Nvme$subsystem", 00:25:20.731 "trtype": "$TEST_TRANSPORT", 00:25:20.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.731 "adrfam": "ipv4", 00:25:20.731 "trsvcid": "$NVMF_PORT", 00:25:20.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.732 "hdgst": ${hdgst:-false}, 00:25:20.732 "ddgst": ${ddgst:-false} 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 } 00:25:20.732 EOF 00:25:20.732 )") 00:25:20.732 [2024-06-09 23:07:48.789125] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:20.732 [2024-06-09 23:07:48.789191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.732 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.732 { 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme$subsystem", 00:25:20.732 "trtype": "$TEST_TRANSPORT", 00:25:20.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "$NVMF_PORT", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.732 "hdgst": ${hdgst:-false}, 00:25:20.732 "ddgst": ${ddgst:-false} 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 } 00:25:20.732 EOF 00:25:20.732 )") 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.732 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.732 { 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme$subsystem", 00:25:20.732 "trtype": "$TEST_TRANSPORT", 00:25:20.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "$NVMF_PORT", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.732 "hdgst": ${hdgst:-false}, 00:25:20.732 "ddgst": ${ddgst:-false} 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 } 00:25:20.732 EOF 00:25:20.732 )") 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.732 23:07:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.732 { 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme$subsystem", 00:25:20.732 "trtype": "$TEST_TRANSPORT", 00:25:20.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "$NVMF_PORT", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.732 "hdgst": ${hdgst:-false}, 00:25:20.732 "ddgst": ${ddgst:-false} 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 } 00:25:20.732 EOF 00:25:20.732 )") 00:25:20.732 23:07:48 -- nvmf/common.sh@542 -- # cat 00:25:20.732 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.732 23:07:48 -- nvmf/common.sh@544 -- # jq . 
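What gen_nvmf_target_json is assembling in the heredoc fragments above is a bdev_nvme configuration: one bdev_nvme_attach_controller entry per subsystem (Nvme1..Nvme10, all pointing at 10.0.0.2:4420 with per-subsystem subnqn/hostnqn and digests disabled), joined with commas and run through jq before being handed to the helper app. The config never touches disk; it reaches the app through process substitution, which is exactly the command line the shell reports later when the helper is killed. Assuming test/nvmf/common.sh has been sourced from the SPDK repo root (it defines gen_nvmf_target_json), the launch boils down to:

    # the helper app attaches one NVMe/TCP controller per generated entry and
    # exposes its own RPC socket at /var/tmp/bdevperf.sock
    ./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10)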
00:25:20.732 23:07:48 -- nvmf/common.sh@545 -- # IFS=, 00:25:20.732 23:07:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme1", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme2", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme3", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme4", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme5", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme6", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme7", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme8", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": 
"bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme9", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 },{ 00:25:20.732 "params": { 00:25:20.732 "name": "Nvme10", 00:25:20.732 "trtype": "tcp", 00:25:20.732 "traddr": "10.0.0.2", 00:25:20.732 "adrfam": "ipv4", 00:25:20.732 "trsvcid": "4420", 00:25:20.732 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:20.732 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:20.732 "hdgst": false, 00:25:20.732 "ddgst": false 00:25:20.732 }, 00:25:20.732 "method": "bdev_nvme_attach_controller" 00:25:20.732 }' 00:25:20.732 [2024-06-09 23:07:48.851376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.994 [2024-06-09 23:07:48.914262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.382 23:07:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.382 23:07:50 -- common/autotest_common.sh@852 -- # return 0 00:25:22.382 23:07:50 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:22.382 23:07:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.382 23:07:50 -- common/autotest_common.sh@10 -- # set +x 00:25:22.382 23:07:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.382 23:07:50 -- target/shutdown.sh@83 -- # kill -9 10726 00:25:22.382 23:07:50 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:22.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 10726 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:22.382 23:07:50 -- target/shutdown.sh@87 -- # sleep 1 00:25:23.326 23:07:51 -- target/shutdown.sh@88 -- # kill -0 10506 00:25:23.326 23:07:51 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:23.326 23:07:51 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:23.326 23:07:51 -- nvmf/common.sh@520 -- # config=() 00:25:23.326 23:07:51 -- nvmf/common.sh@520 -- # local subsystem config 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 
00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.326 "ddgst": ${ddgst:-false} 00:25:23.326 }, 00:25:23.326 "method": "bdev_nvme_attach_controller" 00:25:23.326 } 00:25:23.326 EOF 00:25:23.326 )") 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.326 23:07:51 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.326 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.326 { 00:25:23.326 "params": { 00:25:23.326 "name": "Nvme$subsystem", 00:25:23.326 "trtype": "$TEST_TRANSPORT", 00:25:23.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.326 "adrfam": "ipv4", 00:25:23.326 "trsvcid": "$NVMF_PORT", 00:25:23.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.326 "hdgst": ${hdgst:-false}, 00:25:23.327 "ddgst": ${ddgst:-false} 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 } 00:25:23.327 EOF 00:25:23.327 )") 00:25:23.327 [2024-06-09 23:07:51.259793] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:23.327 [2024-06-09 23:07:51.259848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid11331 ] 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.327 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.327 { 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme$subsystem", 00:25:23.327 "trtype": "$TEST_TRANSPORT", 00:25:23.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "$NVMF_PORT", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.327 "hdgst": ${hdgst:-false}, 00:25:23.327 "ddgst": ${ddgst:-false} 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 } 00:25:23.327 EOF 00:25:23.327 )") 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.327 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.327 { 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme$subsystem", 00:25:23.327 "trtype": "$TEST_TRANSPORT", 00:25:23.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "$NVMF_PORT", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.327 "hdgst": ${hdgst:-false}, 00:25:23.327 "ddgst": ${ddgst:-false} 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 } 00:25:23.327 EOF 00:25:23.327 )") 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.327 23:07:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:23.327 { 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme$subsystem", 00:25:23.327 "trtype": "$TEST_TRANSPORT", 00:25:23.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "$NVMF_PORT", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:23.327 "hdgst": ${hdgst:-false}, 00:25:23.327 "ddgst": ${ddgst:-false} 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 } 00:25:23.327 EOF 00:25:23.327 )") 00:25:23.327 23:07:51 -- nvmf/common.sh@542 -- # cat 00:25:23.327 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.327 23:07:51 -- nvmf/common.sh@544 -- # jq . 
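For orientation, tc1's shutdown scenario has already happened by this point in the trace: bdev_svc (pid 10726) attached controllers to all ten subsystems, was killed with SIGKILL so that every initiator-side connection dropped without any orderly teardown, the leftover /var/run/spdk_bdev1 file was removed, and after a one second pause kill -0 confirmed that the target (pid 10506) is still running. What is being prepared now is a short bdevperf verify pass over the same ten subsystems to show the target still serves I/O. Reduced to its core, with the pids as they appear in this run:

    # the tc1 pattern in miniature: abrupt initiator death must not take the target down
    bdev_svc_pid=10726   # helper holding ten NVMe/TCP controller connections
    target_pid=10506     # nvmf_tgt running inside cvl_0_0_ns_spdk

    kill -9 "$bdev_svc_pid"       # no disconnects, no RPC teardown
    rm -f /var/run/spdk_bdev1
    sleep 1
    kill -0 "$target_pid"         # a non-zero exit here means the target died and the test fails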
00:25:23.327 23:07:51 -- nvmf/common.sh@545 -- # IFS=, 00:25:23.327 23:07:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme1", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme2", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme3", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme4", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme5", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme6", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme7", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme8", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": 
"bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme9", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 },{ 00:25:23.327 "params": { 00:25:23.327 "name": "Nvme10", 00:25:23.327 "trtype": "tcp", 00:25:23.327 "traddr": "10.0.0.2", 00:25:23.327 "adrfam": "ipv4", 00:25:23.327 "trsvcid": "4420", 00:25:23.327 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:23.327 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:23.327 "hdgst": false, 00:25:23.327 "ddgst": false 00:25:23.327 }, 00:25:23.327 "method": "bdev_nvme_attach_controller" 00:25:23.327 }' 00:25:23.327 [2024-06-09 23:07:51.319986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.327 [2024-06-09 23:07:51.382334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.712 Running I/O for 1 seconds... 00:25:25.711 00:25:25.711 Latency(us) 00:25:25.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.711 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme1n1 : 1.11 437.48 27.34 0.00 0.00 139192.24 13325.65 117090.99 00:25:25.711 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme2n1 : 1.12 429.59 26.85 0.00 0.00 140987.88 12615.68 131945.81 00:25:25.711 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme3n1 : 1.07 452.64 28.29 0.00 0.00 137401.55 10758.83 108789.76 00:25:25.711 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme4n1 : 1.08 370.00 23.12 0.00 0.00 165363.68 29928.11 151169.71 00:25:25.711 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme5n1 : 1.10 395.98 24.75 0.00 0.00 155089.85 9011.20 132819.63 00:25:25.711 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme6n1 : 1.10 361.12 22.57 0.00 0.00 168814.62 10540.37 153791.15 00:25:25.711 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme7n1 : 1.11 434.63 27.16 0.00 0.00 133785.37 15182.51 110537.39 00:25:25.711 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme8n1 : 1.09 365.36 22.84 0.00 0.00 163324.05 14636.37 149422.08 00:25:25.711 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme9n1 : 1.08 451.15 28.20 0.00 0.00 131752.71 10540.37 110974.29 00:25:25.711 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:25.711 Verification LBA range: start 0x0 length 0x400 00:25:25.711 Nvme10n1 : 1.09 446.80 27.92 0.00 0.00 132026.38 10813.44 
110100.48 00:25:25.711 =================================================================================================================== 00:25:25.711 Total : 4144.75 259.05 0.00 0.00 145584.18 9011.20 153791.15 00:25:25.973 23:07:53 -- target/shutdown.sh@93 -- # stoptarget 00:25:25.973 23:07:53 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:25.973 23:07:53 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:25.973 23:07:53 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:25.973 23:07:53 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:25.973 23:07:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:25.973 23:07:53 -- nvmf/common.sh@116 -- # sync 00:25:25.973 23:07:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:25.973 23:07:53 -- nvmf/common.sh@119 -- # set +e 00:25:25.973 23:07:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:25.973 23:07:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:25.973 rmmod nvme_tcp 00:25:25.973 rmmod nvme_fabrics 00:25:25.973 rmmod nvme_keyring 00:25:25.973 23:07:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:25.973 23:07:54 -- nvmf/common.sh@123 -- # set -e 00:25:25.973 23:07:54 -- nvmf/common.sh@124 -- # return 0 00:25:25.973 23:07:54 -- nvmf/common.sh@477 -- # '[' -n 10506 ']' 00:25:25.973 23:07:54 -- nvmf/common.sh@478 -- # killprocess 10506 00:25:25.973 23:07:54 -- common/autotest_common.sh@926 -- # '[' -z 10506 ']' 00:25:25.973 23:07:54 -- common/autotest_common.sh@930 -- # kill -0 10506 00:25:25.973 23:07:54 -- common/autotest_common.sh@931 -- # uname 00:25:25.973 23:07:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.973 23:07:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 10506 00:25:25.973 23:07:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:25.973 23:07:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:25.973 23:07:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 10506' 00:25:25.973 killing process with pid 10506 00:25:25.973 23:07:54 -- common/autotest_common.sh@945 -- # kill 10506 00:25:25.973 23:07:54 -- common/autotest_common.sh@950 -- # wait 10506 00:25:26.232 23:07:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:26.232 23:07:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:26.232 23:07:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:26.232 23:07:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.232 23:07:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:26.232 23:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.232 23:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.232 23:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.776 23:07:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:28.776 00:25:28.776 real 0m16.283s 00:25:28.776 user 0m33.170s 00:25:28.776 sys 0m6.418s 00:25:28.776 23:07:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.776 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:28.776 ************************************ 00:25:28.776 END TEST nvmf_shutdown_tc1 00:25:28.776 ************************************ 00:25:28.776 23:07:56 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:28.776 23:07:56 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:28.776 23:07:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.776 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:28.776 ************************************ 00:25:28.776 START TEST nvmf_shutdown_tc2 00:25:28.776 ************************************ 00:25:28.776 23:07:56 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:28.776 23:07:56 -- target/shutdown.sh@98 -- # starttarget 00:25:28.776 23:07:56 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:28.776 23:07:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:28.776 23:07:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.776 23:07:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:28.776 23:07:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:28.776 23:07:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:28.776 23:07:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.776 23:07:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.776 23:07:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.776 23:07:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:28.776 23:07:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:28.776 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:28.776 23:07:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:28.776 23:07:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:28.776 23:07:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:28.776 23:07:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:28.776 23:07:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:28.776 23:07:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:28.776 23:07:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:28.776 23:07:56 -- nvmf/common.sh@294 -- # net_devs=() 00:25:28.776 23:07:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:28.776 23:07:56 -- nvmf/common.sh@295 -- # e810=() 00:25:28.776 23:07:56 -- nvmf/common.sh@295 -- # local -ga e810 00:25:28.776 23:07:56 -- nvmf/common.sh@296 -- # x722=() 00:25:28.776 23:07:56 -- nvmf/common.sh@296 -- # local -ga x722 00:25:28.776 23:07:56 -- nvmf/common.sh@297 -- # mlx=() 00:25:28.776 23:07:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:28.776 23:07:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.776 23:07:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:28.776 23:07:56 -- nvmf/common.sh@320 -- # [[ tcp == 
rdma ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:28.776 23:07:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.776 23:07:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:28.776 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:28.776 23:07:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.776 23:07:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:28.776 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:28.776 23:07:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.776 23:07:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.776 23:07:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.776 23:07:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:28.776 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:28.776 23:07:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.776 23:07:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.776 23:07:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.776 23:07:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.776 23:07:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:28.776 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:28.776 23:07:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.776 23:07:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:28.776 23:07:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:28.776 23:07:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:28.776 23:07:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.776 23:07:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.776 23:07:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.776 23:07:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:28.776 23:07:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.776 23:07:56 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.776 23:07:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:28.776 23:07:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.776 23:07:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.776 23:07:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:28.776 23:07:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:28.776 23:07:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.776 23:07:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.776 23:07:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.777 23:07:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.777 23:07:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:28.777 23:07:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.777 23:07:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.777 23:07:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.777 23:07:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:28.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:25:28.777 00:25:28.777 --- 10.0.0.2 ping statistics --- 00:25:28.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.777 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:25:28.777 23:07:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:25:28.777 00:25:28.777 --- 10.0.0.1 ping statistics --- 00:25:28.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.777 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:25:28.777 23:07:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.777 23:07:56 -- nvmf/common.sh@410 -- # return 0 00:25:28.777 23:07:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:28.777 23:07:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.777 23:07:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:28.777 23:07:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:28.777 23:07:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.777 23:07:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:28.777 23:07:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:28.777 23:07:56 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:28.777 23:07:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:28.777 23:07:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:28.777 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:28.777 23:07:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:28.777 23:07:56 -- nvmf/common.sh@469 -- # nvmfpid=12468 00:25:28.777 23:07:56 -- nvmf/common.sh@470 -- # waitforlisten 12468 00:25:28.777 23:07:56 -- common/autotest_common.sh@819 -- # '[' -z 12468 ']' 00:25:28.777 23:07:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.777 23:07:56 -- common/autotest_common.sh@824 -- # local max_retries=100 
00:25:28.777 23:07:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.777 23:07:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:28.777 23:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:28.777 [2024-06-09 23:07:56.923821] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:28.777 [2024-06-09 23:07:56.923915] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.037 [2024-06-09 23:07:56.992350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.037 [2024-06-09 23:07:57.056162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:29.037 [2024-06-09 23:07:57.056292] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.037 [2024-06-09 23:07:57.056302] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.037 [2024-06-09 23:07:57.056310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.037 [2024-06-09 23:07:57.056434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.037 [2024-06-09 23:07:57.056617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.037 [2024-06-09 23:07:57.056733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.037 [2024-06-09 23:07:57.056734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:29.610 23:07:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:29.610 23:07:57 -- common/autotest_common.sh@852 -- # return 0 00:25:29.610 23:07:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:29.610 23:07:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.610 23:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 23:07:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.610 23:07:57 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:29.610 23:07:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.610 23:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 [2024-06-09 23:07:57.715559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.610 23:07:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.610 23:07:57 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:29.610 23:07:57 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:29.610 23:07:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.610 23:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.610 23:07:57 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 
23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:29.610 23:07:57 -- target/shutdown.sh@28 -- # cat 00:25:29.610 23:07:57 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:29.610 23:07:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.610 23:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:29.871 Malloc1 00:25:29.871 [2024-06-09 23:07:57.815669] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.871 Malloc2 00:25:29.871 Malloc3 00:25:29.871 Malloc4 00:25:29.871 Malloc5 00:25:29.871 Malloc6 00:25:29.871 Malloc7 00:25:30.133 Malloc8 00:25:30.133 Malloc9 00:25:30.133 Malloc10 00:25:30.133 23:07:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.133 23:07:58 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:30.133 23:07:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.133 23:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:30.133 23:07:58 -- target/shutdown.sh@102 -- # perfpid=12849 00:25:30.133 23:07:58 -- target/shutdown.sh@103 -- # waitforlisten 12849 /var/tmp/bdevperf.sock 00:25:30.133 23:07:58 -- common/autotest_common.sh@819 -- # '[' -z 12849 ']' 00:25:30.133 23:07:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:30.133 23:07:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:30.133 23:07:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:30.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
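[Editor's note] Each "cat" traced above appends one subsystem block to rpcs.txt, which shutdown.sh@35 then replays against the running target through rpc_cmd. Stripped of the harness plumbing, one iteration amounts to a handful of RPCs; the following is only a sketch (the rpc.py path, Malloc sizing and serial number are illustrative, not the exact values rpcs.txt carries):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # transport is created once, exactly as shutdown.sh@20 does above
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # then, per subsystem: one malloc bdev, one subsystem, one namespace, one TCP listener
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420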
00:25:30.133 23:07:58 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:30.133 23:07:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:30.133 23:07:58 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:30.133 23:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:30.133 23:07:58 -- nvmf/common.sh@520 -- # config=() 00:25:30.133 23:07:58 -- nvmf/common.sh@520 -- # local subsystem config 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 
00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 [2024-06-09 23:07:58.264194] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:30.133 [2024-06-09 23:07:58.264247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12849 ] 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.133 "hdgst": ${hdgst:-false}, 00:25:30.133 "ddgst": ${ddgst:-false} 00:25:30.133 }, 00:25:30.133 "method": "bdev_nvme_attach_controller" 00:25:30.133 } 00:25:30.133 EOF 00:25:30.133 )") 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.133 23:07:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:30.133 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.133 23:07:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:30.133 { 00:25:30.133 "params": { 00:25:30.133 "name": "Nvme$subsystem", 00:25:30.133 "trtype": "$TEST_TRANSPORT", 00:25:30.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:30.133 "adrfam": "ipv4", 00:25:30.133 "trsvcid": "$NVMF_PORT", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:30.134 "hdgst": ${hdgst:-false}, 00:25:30.134 "ddgst": ${ddgst:-false} 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 } 00:25:30.134 EOF 00:25:30.134 )") 00:25:30.134 23:07:58 -- nvmf/common.sh@542 -- # cat 00:25:30.134 23:07:58 -- nvmf/common.sh@544 -- # jq . 00:25:30.134 23:07:58 -- nvmf/common.sh@545 -- # IFS=, 00:25:30.134 23:07:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme1", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme2", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme3", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme4", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme5", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": 
"ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme6", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme7", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme8", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme9", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 },{ 00:25:30.134 "params": { 00:25:30.134 "name": "Nvme10", 00:25:30.134 "trtype": "tcp", 00:25:30.134 "traddr": "10.0.0.2", 00:25:30.134 "adrfam": "ipv4", 00:25:30.134 "trsvcid": "4420", 00:25:30.134 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:30.134 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:30.134 "hdgst": false, 00:25:30.134 "ddgst": false 00:25:30.134 }, 00:25:30.134 "method": "bdev_nvme_attach_controller" 00:25:30.134 }' 00:25:30.394 [2024-06-09 23:07:58.323450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.394 [2024-06-09 23:07:58.386457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.777 Running I/O for 10 seconds... 
00:25:32.349 23:08:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:32.349 23:08:00 -- common/autotest_common.sh@852 -- # return 0 00:25:32.349 23:08:00 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:32.349 23:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.349 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:32.349 23:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.349 23:08:00 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:32.349 23:08:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:32.349 23:08:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:32.349 23:08:00 -- target/shutdown.sh@57 -- # local ret=1 00:25:32.349 23:08:00 -- target/shutdown.sh@58 -- # local i 00:25:32.349 23:08:00 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:32.349 23:08:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:32.349 23:08:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:32.349 23:08:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:32.349 23:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.349 23:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:32.349 23:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.349 23:08:00 -- target/shutdown.sh@60 -- # read_io_count=129 00:25:32.349 23:08:00 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:25:32.349 23:08:00 -- target/shutdown.sh@64 -- # ret=0 00:25:32.349 23:08:00 -- target/shutdown.sh@65 -- # break 00:25:32.349 23:08:00 -- target/shutdown.sh@69 -- # return 0 00:25:32.349 23:08:00 -- target/shutdown.sh@109 -- # killprocess 12849 00:25:32.349 23:08:00 -- common/autotest_common.sh@926 -- # '[' -z 12849 ']' 00:25:32.349 23:08:00 -- common/autotest_common.sh@930 -- # kill -0 12849 00:25:32.349 23:08:00 -- common/autotest_common.sh@931 -- # uname 00:25:32.349 23:08:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:32.349 23:08:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 12849 00:25:32.349 23:08:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:32.349 23:08:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:32.349 23:08:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 12849' 00:25:32.349 killing process with pid 12849 00:25:32.349 23:08:00 -- common/autotest_common.sh@945 -- # kill 12849 00:25:32.349 23:08:00 -- common/autotest_common.sh@950 -- # wait 12849 00:25:32.610 Received shutdown signal, test time was about 0.601148 seconds 00:25:32.610 00:25:32.610 Latency(us) 00:25:32.610 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.610 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme1n1 : 0.58 392.94 24.56 0.00 0.00 158519.03 15073.28 165150.72 00:25:32.610 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme2n1 : 0.57 396.53 24.78 0.00 0.00 154342.07 17257.81 155538.77 00:25:32.610 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme3n1 : 0.58 472.58 29.54 0.00 0.00 128201.25 2539.52 112721.92 00:25:32.610 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 
64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme4n1 : 0.57 397.24 24.83 0.00 0.00 150253.50 10540.37 147674.45 00:25:32.610 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme5n1 : 0.57 398.30 24.89 0.00 0.00 145962.76 16602.45 130198.19 00:25:32.610 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme6n1 : 0.56 488.82 30.55 0.00 0.00 118562.81 4096.00 117964.80 00:25:32.610 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme7n1 : 0.58 549.78 34.36 0.00 0.00 103774.46 13981.01 109226.67 00:25:32.610 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme8n1 : 0.60 316.53 19.78 0.00 0.00 165302.49 23592.96 152043.52 00:25:32.610 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme9n1 : 0.58 390.62 24.41 0.00 0.00 142082.93 12178.77 136314.88 00:25:32.610 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:32.610 Verification LBA range: start 0x0 length 0x400 00:25:32.610 Nvme10n1 : 0.55 411.03 25.69 0.00 0.00 132306.07 12178.77 116217.17 00:25:32.610 =================================================================================================================== 00:25:32.611 Total : 4214.36 263.40 0.00 0.00 137572.55 2539.52 165150.72 00:25:32.611 23:08:00 -- target/shutdown.sh@112 -- # sleep 1 00:25:33.553 23:08:01 -- target/shutdown.sh@113 -- # kill -0 12468 00:25:33.553 23:08:01 -- target/shutdown.sh@115 -- # stoptarget 00:25:33.553 23:08:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:33.553 23:08:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:33.553 23:08:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:33.553 23:08:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:33.553 23:08:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.553 23:08:01 -- nvmf/common.sh@116 -- # sync 00:25:33.553 23:08:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:33.553 23:08:01 -- nvmf/common.sh@119 -- # set +e 00:25:33.553 23:08:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:33.553 23:08:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:33.553 rmmod nvme_tcp 00:25:33.553 rmmod nvme_fabrics 00:25:33.814 rmmod nvme_keyring 00:25:33.814 23:08:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:33.814 23:08:01 -- nvmf/common.sh@123 -- # set -e 00:25:33.814 23:08:01 -- nvmf/common.sh@124 -- # return 0 00:25:33.814 23:08:01 -- nvmf/common.sh@477 -- # '[' -n 12468 ']' 00:25:33.814 23:08:01 -- nvmf/common.sh@478 -- # killprocess 12468 00:25:33.814 23:08:01 -- common/autotest_common.sh@926 -- # '[' -z 12468 ']' 00:25:33.814 23:08:01 -- common/autotest_common.sh@930 -- # kill -0 12468 00:25:33.814 23:08:01 -- common/autotest_common.sh@931 -- # uname 00:25:33.814 23:08:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.814 23:08:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 12468 00:25:33.814 23:08:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:33.814 23:08:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:33.814 23:08:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 12468' 00:25:33.814 killing process with pid 12468 00:25:33.814 23:08:01 -- common/autotest_common.sh@945 -- # kill 12468 00:25:33.814 23:08:01 -- common/autotest_common.sh@950 -- # wait 12468 00:25:34.075 23:08:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.075 23:08:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.075 23:08:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.075 23:08:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.075 23:08:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.075 23:08:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.075 23:08:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.075 23:08:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.995 23:08:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:35.995 00:25:35.995 real 0m7.675s 00:25:35.995 user 0m22.581s 00:25:35.995 sys 0m1.239s 00:25:35.995 23:08:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:35.995 23:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:35.995 ************************************ 00:25:35.995 END TEST nvmf_shutdown_tc2 00:25:35.995 ************************************ 00:25:36.257 23:08:04 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:36.257 23:08:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:36.257 23:08:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.257 23:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:36.257 ************************************ 00:25:36.257 START TEST nvmf_shutdown_tc3 00:25:36.257 ************************************ 00:25:36.257 23:08:04 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:36.257 23:08:04 -- target/shutdown.sh@120 -- # starttarget 00:25:36.257 23:08:04 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:36.257 23:08:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:36.257 23:08:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.257 23:08:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:36.257 23:08:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:36.257 23:08:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:36.257 23:08:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.257 23:08:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.257 23:08:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.257 23:08:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:36.257 23:08:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:36.257 23:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:36.257 23:08:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:36.257 23:08:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:36.257 23:08:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:36.257 23:08:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:36.257 23:08:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:36.257 23:08:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:36.257 23:08:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:36.257 
23:08:04 -- nvmf/common.sh@294 -- # net_devs=() 00:25:36.257 23:08:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:36.257 23:08:04 -- nvmf/common.sh@295 -- # e810=() 00:25:36.257 23:08:04 -- nvmf/common.sh@295 -- # local -ga e810 00:25:36.257 23:08:04 -- nvmf/common.sh@296 -- # x722=() 00:25:36.257 23:08:04 -- nvmf/common.sh@296 -- # local -ga x722 00:25:36.257 23:08:04 -- nvmf/common.sh@297 -- # mlx=() 00:25:36.257 23:08:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:36.257 23:08:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.257 23:08:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:36.257 23:08:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:36.257 23:08:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:36.257 23:08:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.257 23:08:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:36.257 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:36.257 23:08:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.257 23:08:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:36.257 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:36.257 23:08:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:36.257 23:08:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:36.257 23:08:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.257 23:08:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.257 23:08:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.257 23:08:04 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.257 23:08:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:36.258 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:36.258 23:08:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.258 23:08:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.258 23:08:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.258 23:08:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.258 23:08:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.258 23:08:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:36.258 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:36.258 23:08:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.258 23:08:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:36.258 23:08:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:36.258 23:08:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:36.258 23:08:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:36.258 23:08:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:36.258 23:08:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.258 23:08:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.258 23:08:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.258 23:08:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:36.258 23:08:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.258 23:08:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.258 23:08:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:36.258 23:08:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.258 23:08:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.258 23:08:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:36.258 23:08:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:36.258 23:08:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.258 23:08:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.258 23:08:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.258 23:08:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.258 23:08:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:36.258 23:08:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.520 23:08:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.520 23:08:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.520 23:08:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:36.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:25:36.520 00:25:36.520 --- 10.0.0.2 ping statistics --- 00:25:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.520 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:25:36.520 23:08:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.515 ms 00:25:36.520 00:25:36.520 --- 10.0.0.1 ping statistics --- 00:25:36.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.520 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:25:36.520 23:08:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.520 23:08:04 -- nvmf/common.sh@410 -- # return 0 00:25:36.520 23:08:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.520 23:08:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.520 23:08:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.520 23:08:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.520 23:08:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.520 23:08:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.520 23:08:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.520 23:08:04 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:36.520 23:08:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.520 23:08:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.520 23:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 23:08:04 -- nvmf/common.sh@469 -- # nvmfpid=14153 00:25:36.520 23:08:04 -- nvmf/common.sh@470 -- # waitforlisten 14153 00:25:36.520 23:08:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:36.520 23:08:04 -- common/autotest_common.sh@819 -- # '[' -z 14153 ']' 00:25:36.520 23:08:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.520 23:08:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.520 23:08:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.520 23:08:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.520 23:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:36.520 [2024-06-09 23:08:04.670645] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:36.520 [2024-06-09 23:08:04.670714] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.782 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.782 [2024-06-09 23:08:04.743520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.782 [2024-06-09 23:08:04.814985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.782 [2024-06-09 23:08:04.815124] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.782 [2024-06-09 23:08:04.815134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.782 [2024-06-09 23:08:04.815143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
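[Editor's note] For tc3 the target bring-up repeats the tc2 pattern: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace and the harness waits for its RPC socket before configuring it. A simplified stand-in for that launch-and-wait step (the polling loop below replaces the harness's waitforlisten helper and is only a sketch):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target answers (or bail out if it died)
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup"; exit 1; }
        sleep 0.5
    done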
00:25:36.782 [2024-06-09 23:08:04.815291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.782 [2024-06-09 23:08:04.815451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.782 [2024-06-09 23:08:04.815617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.782 [2024-06-09 23:08:04.815617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:37.355 23:08:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.355 23:08:05 -- common/autotest_common.sh@852 -- # return 0 00:25:37.355 23:08:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:37.355 23:08:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.355 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.355 23:08:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.355 23:08:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.355 23:08:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.355 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.355 [2024-06-09 23:08:05.481571] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.355 23:08:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.355 23:08:05 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:37.355 23:08:05 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:37.355 23:08:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:37.355 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.355 23:08:05 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.355 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.355 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.617 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.617 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.617 23:08:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:37.617 23:08:05 -- target/shutdown.sh@28 -- # cat 00:25:37.617 23:08:05 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:37.617 23:08:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.617 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.617 Malloc1 00:25:37.617 [2024-06-09 23:08:05.581656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.617 Malloc2 
00:25:37.617 Malloc3 00:25:37.617 Malloc4 00:25:37.617 Malloc5 00:25:37.617 Malloc6 00:25:37.617 Malloc7 00:25:37.879 Malloc8 00:25:37.879 Malloc9 00:25:37.879 Malloc10 00:25:37.879 23:08:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.879 23:08:05 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:37.879 23:08:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.879 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.879 23:08:05 -- target/shutdown.sh@124 -- # perfpid=14382 00:25:37.879 23:08:05 -- target/shutdown.sh@125 -- # waitforlisten 14382 /var/tmp/bdevperf.sock 00:25:37.879 23:08:05 -- common/autotest_common.sh@819 -- # '[' -z 14382 ']' 00:25:37.879 23:08:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.879 23:08:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:37.879 23:08:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:37.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:37.879 23:08:05 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:37.879 23:08:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:37.879 23:08:05 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:37.879 23:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:37.879 23:08:05 -- nvmf/common.sh@520 -- # config=() 00:25:37.879 23:08:05 -- nvmf/common.sh@520 -- # local subsystem config 00:25:37.879 23:08:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.879 { 00:25:37.879 "params": { 00:25:37.879 "name": "Nvme$subsystem", 00:25:37.879 "trtype": "$TEST_TRANSPORT", 00:25:37.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.879 "adrfam": "ipv4", 00:25:37.879 "trsvcid": "$NVMF_PORT", 00:25:37.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.879 "hdgst": ${hdgst:-false}, 00:25:37.879 "ddgst": ${ddgst:-false} 00:25:37.879 }, 00:25:37.879 "method": "bdev_nvme_attach_controller" 00:25:37.879 } 00:25:37.879 EOF 00:25:37.879 )") 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # cat 00:25:37.879 23:08:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.879 { 00:25:37.879 "params": { 00:25:37.879 "name": "Nvme$subsystem", 00:25:37.879 "trtype": "$TEST_TRANSPORT", 00:25:37.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.879 "adrfam": "ipv4", 00:25:37.879 "trsvcid": "$NVMF_PORT", 00:25:37.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.879 "hdgst": ${hdgst:-false}, 00:25:37.879 "ddgst": ${ddgst:-false} 00:25:37.879 }, 00:25:37.879 "method": "bdev_nvme_attach_controller" 00:25:37.879 } 00:25:37.879 EOF 00:25:37.879 )") 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # cat 00:25:37.879 23:08:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.879 { 00:25:37.879 "params": { 00:25:37.879 "name": "Nvme$subsystem", 00:25:37.879 "trtype": "$TEST_TRANSPORT", 00:25:37.879 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:37.879 "adrfam": "ipv4", 00:25:37.879 "trsvcid": "$NVMF_PORT", 00:25:37.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.879 "hdgst": ${hdgst:-false}, 00:25:37.879 "ddgst": ${ddgst:-false} 00:25:37.879 }, 00:25:37.879 "method": "bdev_nvme_attach_controller" 00:25:37.879 } 00:25:37.879 EOF 00:25:37.879 )") 00:25:37.879 23:08:05 -- nvmf/common.sh@542 -- # cat 00:25:37.879 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.879 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.879 { 00:25:37.879 "params": { 00:25:37.879 "name": "Nvme$subsystem", 00:25:37.879 "trtype": "$TEST_TRANSPORT", 00:25:37.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.879 "adrfam": "ipv4", 00:25:37.879 "trsvcid": "$NVMF_PORT", 00:25:37.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 [2024-06-09 23:08:06.034281] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:25:37.880 [2024-06-09 23:08:06.034344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid14382 ] 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:37.880 23:08:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:37.880 { 00:25:37.880 "params": { 00:25:37.880 "name": "Nvme$subsystem", 00:25:37.880 "trtype": "$TEST_TRANSPORT", 00:25:37.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:37.880 "adrfam": "ipv4", 00:25:37.880 "trsvcid": "$NVMF_PORT", 00:25:37.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:37.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:37.880 "hdgst": ${hdgst:-false}, 00:25:37.880 "ddgst": ${ddgst:-false} 00:25:37.880 }, 00:25:37.880 "method": "bdev_nvme_attach_controller" 00:25:37.880 } 00:25:37.880 EOF 00:25:37.880 )") 00:25:37.880 23:08:06 -- nvmf/common.sh@542 -- # cat 00:25:38.141 23:08:06 -- nvmf/common.sh@544 -- # jq . 
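[Editor's note] The bdevperf command line above passes --json /dev/fd/63, i.e. the controller list produced by gen_nvmf_target_json is handed over through bash process substitution rather than a file on disk. A one-controller sketch of the same pattern (the JSON wrapper and values are illustrative; the generated config carries all ten controllers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
        --json <(printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                        "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                        "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } } ] } ] }')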
00:25:38.141 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.141 23:08:06 -- nvmf/common.sh@545 -- # IFS=, 00:25:38.141 23:08:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:38.141 "params": { 00:25:38.141 "name": "Nvme1", 00:25:38.141 "trtype": "tcp", 00:25:38.141 "traddr": "10.0.0.2", 00:25:38.141 "adrfam": "ipv4", 00:25:38.141 "trsvcid": "4420", 00:25:38.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:38.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:38.141 "hdgst": false, 00:25:38.141 "ddgst": false 00:25:38.141 }, 00:25:38.141 "method": "bdev_nvme_attach_controller" 00:25:38.141 },{ 00:25:38.141 "params": { 00:25:38.141 "name": "Nvme2", 00:25:38.141 "trtype": "tcp", 00:25:38.141 "traddr": "10.0.0.2", 00:25:38.141 "adrfam": "ipv4", 00:25:38.141 "trsvcid": "4420", 00:25:38.141 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:38.141 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:38.141 "hdgst": false, 00:25:38.141 "ddgst": false 00:25:38.141 }, 00:25:38.141 "method": "bdev_nvme_attach_controller" 00:25:38.141 },{ 00:25:38.141 "params": { 00:25:38.141 "name": "Nvme3", 00:25:38.141 "trtype": "tcp", 00:25:38.141 "traddr": "10.0.0.2", 00:25:38.141 "adrfam": "ipv4", 00:25:38.141 "trsvcid": "4420", 00:25:38.141 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:38.141 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme4", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme5", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme6", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme7", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme8", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 
00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme9", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 },{ 00:25:38.142 "params": { 00:25:38.142 "name": "Nvme10", 00:25:38.142 "trtype": "tcp", 00:25:38.142 "traddr": "10.0.0.2", 00:25:38.142 "adrfam": "ipv4", 00:25:38.142 "trsvcid": "4420", 00:25:38.142 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:38.142 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:38.142 "hdgst": false, 00:25:38.142 "ddgst": false 00:25:38.142 }, 00:25:38.142 "method": "bdev_nvme_attach_controller" 00:25:38.142 }' 00:25:38.142 [2024-06-09 23:08:06.093753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.142 [2024-06-09 23:08:06.156707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.528 Running I/O for 10 seconds... 00:25:40.118 23:08:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.118 23:08:08 -- common/autotest_common.sh@852 -- # return 0 00:25:40.118 23:08:08 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:40.118 23:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.118 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 23:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.118 23:08:08 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.118 23:08:08 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:40.118 23:08:08 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:40.118 23:08:08 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:40.118 23:08:08 -- target/shutdown.sh@57 -- # local ret=1 00:25:40.118 23:08:08 -- target/shutdown.sh@58 -- # local i 00:25:40.118 23:08:08 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:40.118 23:08:08 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:40.118 23:08:08 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:40.118 23:08:08 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:40.118 23:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.118 23:08:08 -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 23:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.118 23:08:08 -- target/shutdown.sh@60 -- # read_io_count=216 00:25:40.118 23:08:08 -- target/shutdown.sh@63 -- # '[' 216 -ge 100 ']' 00:25:40.118 23:08:08 -- target/shutdown.sh@64 -- # ret=0 00:25:40.118 23:08:08 -- target/shutdown.sh@65 -- # break 00:25:40.118 23:08:08 -- target/shutdown.sh@69 -- # return 0 00:25:40.118 23:08:08 -- target/shutdown.sh@134 -- # killprocess 14153 00:25:40.118 23:08:08 -- common/autotest_common.sh@926 -- # '[' -z 14153 ']' 00:25:40.118 23:08:08 -- common/autotest_common.sh@930 -- # kill -0 14153 00:25:40.118 23:08:08 -- common/autotest_common.sh@931 -- # uname 00:25:40.118 23:08:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:40.118 23:08:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 14153 00:25:40.118 23:08:08 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:40.118 23:08:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:40.118 23:08:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 14153' 00:25:40.118 killing process with pid 14153 00:25:40.118 23:08:08 -- common/autotest_common.sh@945 -- # kill 14153 00:25:40.118 23:08:08 -- common/autotest_common.sh@950 -- # wait 14153 00:25:40.118 [2024-06-09 23:08:08.206601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206671] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 00:25:40.118 [2024-06-09 23:08:08.206749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18338e0 is same with the state(5) to be set 
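Note on the configuration printed above: the JSON emitted by nvmf/common.sh is the bdev subsystem config that bdevperf was started with, one bdev_nvme_attach_controller entry per subsystem Nvme1 through Nvme10, all reached over TCP at 10.0.0.2:4420 with header and data digests disabled. A rough hand-written equivalent, issued against the bdevperf RPC socket instead of the generated JSON, could look like the sketch below; the rpc.py flag spelling is an assumption and should be checked against scripts/rpc.py in the SPDK tree under test.

# Hypothetical sketch, not the autotest code: attach the same ten TCP
# controllers one by one over the bdevperf RPC socket.
for i in $(seq 1 10); do
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b "Nvme$i" -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n "nqn.2016-06.io.spdk:cnode$i" -q "nqn.2016-06.io.spdk:host$i"
done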
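The xtrace above from target/shutdown.sh (script lines 57-69) is the waitforio helper: it polls bdev_get_iostat for Nvme1n1 on the bdevperf RPC socket up to ten times and succeeds once at least 100 reads have completed; here the very first sample reports read_io_count=216, so the loop breaks immediately. A condensed sketch of that logic (the pacing between retries is an assumption, the trace does not show it):

# Condensed sketch of the waitforio loop traced above, not the verbatim helper.
waitforio() {
    local sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1 # assumed pacing between samples
    done
    return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1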
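killprocess 14153 in the trace above follows the usual autotest pattern: check that the PID is set and still alive with kill -0, confirm via ps that the target of the kill is an SPDK reactor rather than sudo, then kill it and wait for it to exit. A minimal sketch of that pattern (error handling and the sudo branch are simplified assumptions):

# Minimal sketch of the killprocess pattern from common/autotest_common.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1             # refuse an empty PID
    kill -0 "$pid" || return 1            # is the process still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1        # assumption: never signal the sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                   # reap it if it is a child of this shell
}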
[the recv-state error for tqpair=0x18338e0 repeats through 23:08:08.206880]
00:25:40.119 [2024-06-09 23:08:08.208078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b7260 is same with the state(5) to be set
[the recv-state error for tqpair=0x18b7260 repeats through 23:08:08.208385]
00:25:40.120 [2024-06-09 23:08:08.209199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1833d90 is same with the state(5) to be set
[the recv-state error for tqpair=0x1833d90 repeats through 23:08:08.209496]
00:25:40.121 [2024-06-09 23:08:08.210564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834240 is same with the state(5) to be set
[the recv-state error for tqpair=0x1834240 repeats through 23:08:08.210687]
00:25:40.121 [2024-06-09 23:08:08.210746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:40.121 [2024-06-09 23:08:08.210783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair is likewise reported for cid:1, cid:2 and cid:3 on each of the admin qpairs below]
00:25:40.121 [2024-06-09 23:08:08.210848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set
00:25:40.122 [2024-06-09 23:08:08.210950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d940 is same with the state(5) to be set
00:25:40.122 [2024-06-09 23:08:08.211036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set
00:25:40.122 [2024-06-09 23:08:08.211119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964020 is same with the state(5) to be set
00:25:40.122 [2024-06-09 23:08:08.211354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18346d0 is same with the state(5) to be set
00:25:40.122 [2024-06-09 23:08:08.211715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1834b80 is same with the state(5) to be set
[the recv-state error for tqpair=0x1834b80 repeats through 23:08:08.212001]
00:25:40.123 [2024-06-09 23:08:08.212435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.123 [2024-06-09 23:08:08.212462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.123 [2024-06-09 23:08:08.212478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.123 [2024-06-09 23:08:08.212486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.123 [2024-06-09 23:08:08.212647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.123 [2024-06-09 23:08:08.212654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.123 [2024-06-09 23:08:08.212663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.123 [2024-06-09 23:08:08.212670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.123 [2024-06-09 23:08:08.212679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.123 [2024-06-09 23:08:08.212686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.123 [2024-06-09 23:08:08.212695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.123 [2024-06-09 23:08:08.212703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.123 [2024-06-09 23:08:08.212712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.123 [2024-06-09 23:08:08.212711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.124 [2024-06-09 23:08:08.212862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.124 [2024-06-09 23:08:08.212874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.124 [2024-06-09 23:08:08.212883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212973] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.212993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.212996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.212998] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.213020] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.213040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.213058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1835030 is same with the state(5) to be set 
00:25:40.125 [2024-06-09 23:08:08.213072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.213083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.125 [2024-06-09 23:08:08.213100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.125 [2024-06-09 23:08:08.213106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.126 [2024-06-09 23:08:08.213213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.126 [2024-06-09 23:08:08.213221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 
[2024-06-09 23:08:08.213229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 
23:08:08.213391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.126 [2024-06-09 23:08:08.213557] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.126 [2024-06-09 23:08:08.213564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.127 [2024-06-09 23:08:08.213621] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x93aa90 was disconnected and freed. reset controller. 00:25:40.127 [2024-06-09 23:08:08.214087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 
is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214307] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214363] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.127 [2024-06-09 23:08:08.214476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.214483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.214489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.214495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.214501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the 
state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.214508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6920 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 
23:08:08.215282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215291] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.215328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.128 [2024-06-09 23:08:08.217033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.128 [2024-06-09 23:08:08.217063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.128 [2024-06-09 23:08:08.217313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.128 [2024-06-09 23:08:08.217331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.217687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.217700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x784c70 is same with the state(5) to be set 00:25:40.129 [2024-06-09 23:08:08.217738] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x784c70 was disconnected and freed. reset controller. 
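The pattern visible above repeats per controller: the target side logs nvmf_tcp_qpair_set_recv_state errors for a qpair, the host side prints its outstanding commands as ABORTED - SQ DELETION completions, and bdev_nvme reports that the qpair was disconnected and freed and resets the controller (nvme_ctrlr_disconnect on nqn.2016-06.io.spdk:cnode3 in this run). Because host and target write to the same console, their records interleave, so when triaging output like this it can help to re-split the text at the record prefix both sides share. The sketch below is illustrative only: split_records and RECORD_START are made-up names, and it only re-breaks flattened text at record boundaries; it does not untangle records that two processes spliced into one another mid-line.

# Illustrative sketch (not part of the test): split a chunk of this console
# output back into individual SPDK log records.  Each record starts with a
# bracketed wall-clock timestamp followed by "file.c:line:function: *LEVEL*:",
# which is the format used throughout the output above.
import re

RECORD_START = re.compile(
    r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\] "   # e.g. [2024-06-09 23:08:08.213621]
    r"\S+\.c:\s*\d+:\w+: \*\w+\*:"                     # e.g. bdev_nvme.c:1590:...: *NOTICE*:
)

def split_records(raw: str):
    """Yield each log record as its own string, in the order it appears."""
    starts = [m.start() for m in RECORD_START.finditer(raw)]
    for begin, end in zip(starts, starts[1:] + [len(raw)]):
        yield " ".join(raw[begin:end].split())  # collapse stray line breaks

if __name__ == "__main__":
    sample = (
        "[2024-06-09 23:08:08.213621] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: "
        "*NOTICE*: qpair 0x93aa90 was disconnected and freed. reset controller. "
        "[2024-06-09 23:08:08.217033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: "
        "*NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller"
    )
    for rec in split_records(sample):
        print(rec)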
00:25:40.129 [2024-06-09 23:08:08.218572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.129 [2024-06-09 23:08:08.218690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.129 [2024-06-09 23:08:08.218699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 
23:08:08.218749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218917] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.218924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.218932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x788e10 is same with the state(5) to be set 00:25:40.130 [2024-06-09 23:08:08.218967] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x788e10 was disconnected and freed. reset controller. 00:25:40.130 [2024-06-09 23:08:08.219065] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.130 [2024-06-09 23:08:08.219114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.130 [2024-06-09 23:08:08.219222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.130 [2024-06-09 23:08:08.219231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:40.131 [2024-06-09 23:08:08.219434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 
[2024-06-09 23:08:08.219596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.219603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.131 [2024-06-09 23:08:08.219612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.131 [2024-06-09 23:08:08.230654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230680] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.131 [2024-06-09 23:08:08.230701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.132 [2024-06-09 23:08:08.230705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.132 [2024-06-09 23:08:08.230710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.132 [2024-06-09 23:08:08.230714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b6db0 is same with the state(5) to be set 00:25:40.132 [2024-06-09 23:08:08.234610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:40.132 [2024-06-09 23:08:08.234873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.234987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.234996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.235004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.235013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.235020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.235029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 
[2024-06-09 23:08:08.235036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.235045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.235052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.132 [2024-06-09 23:08:08.235062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.132 [2024-06-09 23:08:08.235069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.235199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.235264] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x76e1d0 was disconnected and freed. reset controller. 00:25:40.133 [2024-06-09 23:08:08.235313] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.133 [2024-06-09 23:08:08.236685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.236988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.236996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.237005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.237012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.133 [2024-06-09 23:08:08.237021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.237028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:40.133 [2024-06-09 23:08:08.237037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.133 [2024-06-09 23:08:08.237045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.134 [2024-06-09 23:08:08.237145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.237154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x786250 is same with the state(5) to be set 00:25:40.134 [2024-06-09 23:08:08.237195] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x786250 was disconnected and freed. reset controller. 
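Note: the flood of *NOTICE* lines above is SPDK's nvme_qpair.c print helpers dumping every I/O still outstanding when the test reset its controllers; each command print is paired with a completion carrying status 00/08 (ABORTED - SQ DELETION), and the affected qpairs (0x788e10, 0x76e1d0, 0x786250) are then disconnected and freed by bdev_nvme before the reset proceeds. At NOTICE level this is typically expected churn while controllers are being reset rather than a failure in itself. Below is a minimal, hypothetical Python sketch for tallying these aborts when triaging such a console log; the regex, function name, and stdin usage are illustrative and not part of SPDK or its test scripts.

import re
import sys
from collections import Counter

# Matches SPDK's nvme_io_qpair_print_command NOTICE lines as they appear above.
# In this log every such command print is followed by an ABORTED - SQ DELETION
# completion, so counting command prints approximates the number of aborted I/Os.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:\d+ nsid:\d+ lba:\d+ len:(\d+)"
)

def summarize_aborted_io(log_text):
    """Count aborted I/O command prints per (opcode, sqid) in a console log."""
    counts = Counter()
    for opcode, sqid, _length in CMD_RE.findall(log_text):
        counts[(opcode, int(sqid))] += 1
    return counts

if __name__ == "__main__":
    for (opcode, sqid), n in sorted(summarize_aborted_io(sys.stdin.read()).items()):
        print(f"{opcode:5s} sqid={sqid}: {n} commands aborted")

Feeding the console output on stdin gives a quick per-opcode count of how much I/O was in flight when each submission queue was deleted.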
00:25:40.134 [2024-06-09 23:08:08.238343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:40.134 [2024-06-09 23:08:08.238392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:40.134 [2024-06-09 23:08:08.238433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.134 [2024-06-09 23:08:08.238451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.134 [2024-06-09 23:08:08.239150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.134 [2024-06-09 23:08:08.239817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.134 [2024-06-09 23:08:08.239853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a0520 with addr=10.0.0.2, port=4420 00:25:40.134 [2024-06-09 23:08:08.239865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set 00:25:40.134 [2024-06-09 23:08:08.239904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.239914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.239924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.239931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.239939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.239951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.239959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.239966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.239973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c8d70 is same with the state(5) to be set 00:25:40.134 [2024-06-09 23:08:08.239998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85d940 (9): Bad file descriptor 00:25:40.134 [2024-06-09 23:08:08.240017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.134 [2024-06-09 23:08:08.240030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964020 (9): Bad file descriptor 00:25:40.134 [2024-06-09 23:08:08.240056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.134 [2024-06-09 23:08:08.240073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876cd0 is same with the state(5) to be set 00:25:40.134 [2024-06-09 23:08:08.240138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e5f0 is same with the state(5) to be set 00:25:40.134 [2024-06-09 23:08:08.240222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.134 [2024-06-09 23:08:08.240270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.134 [2024-06-09 23:08:08.240277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.240284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.135 [2024-06-09 23:08:08.240305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.242663] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.135 [2024-06-09 23:08:08.242793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.135 [2024-06-09 23:08:08.242812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:40.135 [2024-06-09 23:08:08.242826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.243486] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.135 [2024-06-09 23:08:08.244054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.244698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.244736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c1060 with addr=10.0.0.2, port=4420 00:25:40.135 [2024-06-09 23:08:08.244748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1060 is same with the state(5) to be set 00:25:40.135 [2024-06-09 23:08:08.245306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.245973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.246011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798160 with addr=10.0.0.2, port=4420 00:25:40.135 [2024-06-09 23:08:08.246022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798160 is same with the state(5) to be set 00:25:40.135 [2024-06-09 23:08:08.246642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.247092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.247104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79d7c0 with addr=10.0.0.2, port=4420 00:25:40.135 [2024-06-09 23:08:08.247113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set 00:25:40.135 [2024-06-09 23:08:08.247139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:40.135 [2024-06-09 23:08:08.247146] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:40.135 [2024-06-09 23:08:08.247155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:40.135 [2024-06-09 23:08:08.247822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.135 [2024-06-09 23:08:08.248756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.249188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.135 [2024-06-09 23:08:08.249201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963590 with addr=10.0.0.2, port=4420 00:25:40.135 [2024-06-09 23:08:08.249210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.135 [2024-06-09 23:08:08.249225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249353] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:40.135 [2024-06-09 23:08:08.249376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249385] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:40.135 [2024-06-09 23:08:08.249392] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:40.135 [2024-06-09 23:08:08.249399] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:40.135 [2024-06-09 23:08:08.249418] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:40.135 [2024-06-09 23:08:08.249425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:40.135 [2024-06-09 23:08:08.249431] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.135 [2024-06-09 23:08:08.249442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.135 [2024-06-09 23:08:08.249448] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.135 [2024-06-09 23:08:08.249455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
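Note: after the I/O qpairs are torn down, the admin queues' outstanding ASYNC EVENT REQUEST commands (opcode 0c) are aborted the same way, and the host tries to reconnect each subsystem (cnode1, cnode3, cnode4, cnode5, cnode7). The posix_sock_create errors with errno = 111 are ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is not accepting connections at that moment, so nvme_ctrlr_process_init and spdk_nvme_ctrlr_reconnect_poll_async report the reinitialization as failed and _bdev_nvme_reset_ctrlr_complete logs "Resetting controller failed." A small, self-contained Python illustration of that errno follows; the loopback address and port are placeholders, not the test's real NVMe/TCP target.

import errno
import socket

# errno 111 in the log is Linux's ECONNREFUSED: connect() was actively refused
# because nothing was listening on the target port at that moment.
print(errno.ECONNREFUSED)  # 111 on Linux

# Reproducing the condition against a placeholder address (assumes no listener
# on 127.0.0.1:4420 - this is NOT the test's real target at 10.0.0.2:4420).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(1.0)
try:
    sock.connect(("127.0.0.1", 4420))
    print("unexpectedly connected - something is listening on that port")
except OSError as exc:
    print(f"connect() failed, errno = {exc.errno} "
          f"({errno.errorcode.get(exc.errno, 'unknown')})")
finally:
    sock.close()

In the log above the same refusal shows up for several tqpairs (0x7a0520, 0x7c1060, 0x798160, 0x79d7c0, 0x963590) before the per-controller reset is declared failed.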
00:25:40.135 [2024-06-09 23:08:08.249470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c8d70 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876cd0 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e5f0 (9): Bad file descriptor 00:25:40.135 [2024-06-09 23:08:08.249591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.135 [2024-06-09 23:08:08.249601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.135 [2024-06-09 23:08:08.249607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.135 [2024-06-09 23:08:08.249625] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:40.135 [2024-06-09 23:08:08.249631] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:40.135 [2024-06-09 23:08:08.249638] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:40.135 [2024-06-09 23:08:08.249670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.135 [2024-06-09 23:08:08.249807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.135 [2024-06-09 23:08:08.249815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.249989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.249996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.136 [2024-06-09 23:08:08.250270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.136 [2024-06-09 23:08:08.250277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.137 [2024-06-09 23:08:08.250293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 
[2024-06-09 23:08:08.250464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 
23:08:08.250630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.137 [2024-06-09 23:08:08.250671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.137 [2024-06-09 23:08:08.250678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.250687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.250694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.250704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.250711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.250720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.250726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.250736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.250742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.250752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92e610 is same with the state(5) to be set 00:25:40.138 [2024-06-09 23:08:08.252022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.138 [2024-06-09 23:08:08.252433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.138 [2024-06-09 23:08:08.252442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.139 [2024-06-09 23:08:08.252924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.139 [2024-06-09 23:08:08.252933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.252940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.252949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.252956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.252965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.252972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.252981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.252988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.252997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253063] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.140 [2024-06-09 23:08:08.253086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.140 [2024-06-09 23:08:08.253093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78cf70 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.254933] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.140 [2024-06-09 23:08:08.254954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.140 [2024-06-09 23:08:08.254963] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:40.140 [2024-06-09 23:08:08.254973] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:40.140 [2024-06-09 23:08:08.255617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.256152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.256162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a0520 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.256170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.256451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.256938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.256947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964020 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.256954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964020 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.257459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.257725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.257735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85d940 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.257742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d940 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.258299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.140 [2024-06-09 23:08:08.258311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:40.140 [2024-06-09 23:08:08.258319] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:40.140 [2024-06-09 23:08:08.258344] 
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.140 [2024-06-09 23:08:08.258354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964020 (9): Bad file descriptor 00:25:40.140 [2024-06-09 23:08:08.258363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85d940 (9): Bad file descriptor 00:25:40.140 [2024-06-09 23:08:08.258931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.259180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.259189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79d7c0 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.259200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.259739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.259987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.259997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798160 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.260004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798160 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.260502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.261072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.140 [2024-06-09 23:08:08.261081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c1060 with addr=10.0.0.2, port=4420 00:25:40.140 [2024-06-09 23:08:08.261088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1060 is same with the state(5) to be set 00:25:40.140 [2024-06-09 23:08:08.261095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:40.140 [2024-06-09 23:08:08.261102] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:40.140 [2024-06-09 23:08:08.261109] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:40.140 [2024-06-09 23:08:08.261120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:40.140 [2024-06-09 23:08:08.261126] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:40.140 [2024-06-09 23:08:08.261132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:40.140 [2024-06-09 23:08:08.261143] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:40.140 [2024-06-09 23:08:08.261149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:40.140 [2024-06-09 23:08:08.261155] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:40.141 [2024-06-09 23:08:08.261196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:40.141 [2024-06-09 23:08:08.261205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.141 [2024-06-09 23:08:08.261211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.141 [2024-06-09 23:08:08.261217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.141 [2024-06-09 23:08:08.261230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.141 [2024-06-09 23:08:08.261238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.141 [2024-06-09 23:08:08.261248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.141 [2024-06-09 23:08:08.261709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.141 [2024-06-09 23:08:08.262232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.141 [2024-06-09 23:08:08.262242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963590 with addr=10.0.0.2, port=4420 00:25:40.141 [2024-06-09 23:08:08.262249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.141 [2024-06-09 23:08:08.262257] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.141 [2024-06-09 23:08:08.262263] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.141 [2024-06-09 23:08:08.262273] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.141 [2024-06-09 23:08:08.262283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:40.141 [2024-06-09 23:08:08.262289] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:40.141 [2024-06-09 23:08:08.262296] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.141 [2024-06-09 23:08:08.262306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:40.141 [2024-06-09 23:08:08.262312] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:40.141 [2024-06-09 23:08:08.262319] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:40.141 [2024-06-09 23:08:08.262356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 
23:08:08.262538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262699] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.141 [2024-06-09 23:08:08.262706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.141 [2024-06-09 23:08:08.262715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.262985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.262993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.142 [2024-06-09 23:08:08.263213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.142 [2024-06-09 23:08:08.263222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.263410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.263419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x787830 is same with the state(5) to be set 00:25:40.143 [2024-06-09 23:08:08.264671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.264989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.264997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.265013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.265031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.265048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.265065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.143 [2024-06-09 23:08:08.265081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.143 [2024-06-09 23:08:08.265090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.144 [2024-06-09 23:08:08.265738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.144 [2024-06-09 23:08:08.265746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78a3f0 is same with the state(5) to be set 00:25:40.145 [2024-06-09 23:08:08.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.266984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.266996] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20736 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26624 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.145 [2024-06-09 23:08:08.267649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.145 [2024-06-09 23:08:08.267658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:40.146 [2024-06-09 23:08:08.267843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.267983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 [2024-06-09 23:08:08.267990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.146 [2024-06-09 23:08:08.268000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.146 
[2024-06-09 23:08:08.268007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.146 [2024-06-09 23:08:08.268016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.146 [2024-06-09 23:08:08.268023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.146 [2024-06-09 23:08:08.268032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:40.146 [2024-06-09 23:08:08.268039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:40.146 [2024-06-09 23:08:08.268047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x78b990 is same with the state(5) to be set
00:25:40.146 [2024-06-09 23:08:08.269741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:40.146 [2024-06-09 23:08:08.269760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:40.146 [2024-06-09 23:08:08.269766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:40.146 [2024-06-09 23:08:08.269774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:40.146 [2024-06-09 23:08:08.269785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:40.411 task offset: 34816 on job bdev=Nvme3n1 fails
00:25:40.411
00:25:40.411 Latency(us)
00:25:40.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:40.411 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme1n1 ended in about 0.64 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme1n1 : 0.64 396.92 24.81 100.01 0.00 127763.51 13434.88 111848.11
00:25:40.411 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme2n1 ended in about 0.65 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme2n1 : 0.65 252.18 15.76 98.41 0.00 178922.86 93934.93 174762.67
00:25:40.411 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme3n1 ended in about 0.61 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme3n1 : 0.61 408.94 25.56 104.27 0.00 120466.59 2334.72 111848.11
00:25:40.411 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme4n1 ended in about 0.64 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme4n1 : 0.64 395.26 24.70 33.07 0.00 139580.64 19770.03 120586.24
00:25:40.411 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme5n1 ended in about 0.64 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme5n1 : 0.64 383.79 23.99 40.56 0.00 138958.61 23046.83 118838.61
00:25:40.411 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme6n1 ended in about 0.66 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme6n1 : 0.66 313.75 19.61 96.54 0.00 145562.20 86944.43 117090.99
00:25:40.411 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme7n1 ended in about 0.64 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme7n1 : 0.64 394.25 24.64 32.98 0.00 134266.68 20316.16 104420.69
00:25:40.411 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme8n1 ended in about 0.67 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme8n1 : 0.67 323.18 20.20 96.20 0.00 138831.99 12397.23 111848.11
00:25:40.411 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme9n1 ended in about 0.67 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme9n1 : 0.67 245.67 15.35 95.87 0.00 168502.25 92624.21 154664.96
00:25:40.411 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:40.411 Job: Nvme10n1 ended in about 0.65 seconds with error
00:25:40.411 Verification LBA range: start 0x0 length 0x400
00:25:40.411 Nvme10n1 : 0.65 329.43 20.59 98.06 0.00 132501.93 9338.88 113595.73
00:25:40.411 ===================================================================================================================
00:25:40.412 Total : 3443.36 215.21 795.99 0.00 140902.01 2334.72 174762.67
00:25:40.412 [2024-06-09 23:08:08.297525] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:40.412 [2024-06-09 23:08:08.297614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor
00:25:40.412 [2024-06-09 23:08:08.297753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:40.412 [2024-06-09 23:08:08.298396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.412 [2024-06-09 23:08:08.298801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.412 [2024-06-09 23:08:08.298812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x876cd0 with addr=10.0.0.2, port=4420
00:25:40.412 [2024-06-09 23:08:08.298822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876cd0 is same with the state(5) to be set
00:25:40.412 [2024-06-09 23:08:08.299204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.412 [2024-06-09 23:08:08.299743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.412 [2024-06-09 23:08:08.299753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85e5f0 with addr=10.0.0.2, port=4420
00:25:40.412 [2024-06-09 23:08:08.299761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e5f0 is same with the state(5) to be set
00:25:40.412 [2024-06-09 23:08:08.299773] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:40.412 [2024-06-09 23:08:08.299780] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:40.412 [2024-06-09 23:08:08.299788] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
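
A quick cross-check of the bdevperf summary above (an illustrative sketch, not part of the captured output): every job header reports "IO size: 65536", so the MiB/s column should equal IOPS * 65536 / 2^20, i.e. IOPS / 16. The short Python snippet below reproduces the Nvme1n1 row with values copied from the table.

    # Illustrative sanity check of the Nvme1n1 row; not part of the test output.
    io_size_bytes = 65536                          # "IO size: 65536" from the job header
    iops = 396.92                                  # Nvme1n1 IOPS column
    mib_per_s = iops * io_size_bytes / (1024 ** 2) # convert bytes/s to MiB/s
    print(f"{mib_per_s:.2f} MiB/s")                # -> 24.81 MiB/s, matching the MiB/s column
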
00:25:40.412 [2024-06-09 23:08:08.299804] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299824] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299835] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299845] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299855] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299866] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.299876] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.412 [2024-06-09 23:08:08.300703] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300725] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300733] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300742] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300751] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.412 [2024-06-09 23:08:08.300773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.412 [2024-06-09 23:08:08.301327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.301822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.301832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c8d70 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.301839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c8d70 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.301849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876cd0 (9): Bad file descriptor 00:25:40.412 [2024-06-09 23:08:08.301859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e5f0 (9): Bad file descriptor 00:25:40.412 [2024-06-09 23:08:08.302426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.302931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.302941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85d940 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.302948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d940 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.303461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.303938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.303947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964020 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.303954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964020 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.304238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.304750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.304761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a0520 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.304769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.305160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.305405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.305415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c1060 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.305422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1060 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.305956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.306591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.306632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798160 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 
23:08:08.306644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798160 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.307174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.307754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-06-09 23:08:08.307791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79d7c0 with addr=10.0.0.2, port=4420 00:25:40.412 [2024-06-09 23:08:08.307802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set 00:25:40.412 [2024-06-09 23:08:08.307818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c8d70 (9): Bad file descriptor 00:25:40.412 [2024-06-09 23:08:08.307829] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:40.412 [2024-06-09 23:08:08.307836] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:40.412 [2024-06-09 23:08:08.307845] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:40.412 [2024-06-09 23:08:08.307861] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.307868] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.307874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:40.413 [2024-06-09 23:08:08.307954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.307965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.307973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85d940 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.307982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964020 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.307992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.308001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.308010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.308020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.308032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308045] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:25:40.413 [2024-06-09 23:08:08.308087] bdev_nvme.c:2861:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:40.413 [2024-06-09 23:08:08.308119] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308135] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308142] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308149] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308158] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308171] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308180] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308187] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308194] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308203] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308216] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308232] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308238] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308247] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.308254] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.308260] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.413 [2024-06-09 23:08:08.308286] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:40.413 [2024-06-09 23:08:08.308297] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:40.413 [2024-06-09 23:08:08.308306] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:40.413 [2024-06-09 23:08:08.308314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.413 [2024-06-09 23:08:08.308320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.413 [2024-06-09 23:08:08.308917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.309591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.309627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85e5f0 with addr=10.0.0.2, port=4420 00:25:40.413 [2024-06-09 23:08:08.309639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e5f0 is same with the state(5) to be set 00:25:40.413 [2024-06-09 23:08:08.310195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.310806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.310844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x876cd0 with addr=10.0.0.2, port=4420 00:25:40.413 [2024-06-09 23:08:08.310856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876cd0 is same with the state(5) to be set 00:25:40.413 [2024-06-09 23:08:08.311361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.311933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-06-09 23:08:08.311970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963590 with addr=10.0.0.2, port=4420 00:25:40.413 [2024-06-09 23:08:08.311981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.413 [2024-06-09 23:08:08.312025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e5f0 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.312037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876cd0 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.312046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:25:40.413 [2024-06-09 23:08:08.312139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.312149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.312156] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:40.413 [2024-06-09 23:08:08.312166] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.312172] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:40.413 [2024-06-09 23:08:08.312179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:40.413 [2024-06-09 23:08:08.312187] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:40.413 [2024-06-09 23:08:08.312194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.312200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:40.414 [2024-06-09 23:08:08.312227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312254] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312262] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312271] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:40.414 [2024-06-09 23:08:08.312327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.312335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.312341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.414 [2024-06-09 23:08:08.312889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.313596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.313633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79d7c0 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.313644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.314171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.314807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.314846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798160 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.314856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798160 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.315348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.315768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.315779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c1060 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.315786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1060 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.316023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.316519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.316529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a0520 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.316536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.316801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.317147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.317157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964020 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.317164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964020 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.317664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.318199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.318208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85d940 with addr=10.0.0.2, port=4420 00:25:40.414 [2024-06-09 23:08:08.318215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d940 is same with the state(5) to be set 00:25:40.414 [2024-06-09 23:08:08.318264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.414 [2024-06-09 
23:08:08.318277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.414 [2024-06-09 23:08:08.318286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.414 [2024-06-09 23:08:08.318295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.414 [2024-06-09 23:08:08.318309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964020 (9): Bad file descriptor 00:25:40.414 [2024-06-09 23:08:08.318318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85d940 (9): Bad file descriptor 00:25:40.414 [2024-06-09 23:08:08.318380] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.414 [2024-06-09 23:08:08.318412] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318418] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318425] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.414 [2024-06-09 23:08:08.318434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:40.414 [2024-06-09 23:08:08.318456] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318469] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:40.414 [2024-06-09 23:08:08.318478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318490] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:40.414 [2024-06-09 23:08:08.318499] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:40.414 [2024-06-09 23:08:08.318505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:40.414 [2024-06-09 23:08:08.318512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:25:40.414 [2024-06-09 23:08:08.318546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:40.414 [2024-06-09 23:08:08.318558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.318565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.318571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.318577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.318583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.318620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.414 [2024-06-09 23:08:08.319172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-06-09 23:08:08.319811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.319848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c8d70 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.319859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c8d70 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.319889] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:40.415 [2024-06-09 23:08:08.319913] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:40.415 [2024-06-09 23:08:08.319923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:40.415 [2024-06-09 23:08:08.319953] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c8d70 (9): Bad file descriptor 00:25:40.415 [2024-06-09 23:08:08.320621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.321154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.321166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963590 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.321175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.321774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.322263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.322275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x876cd0 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.322284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876cd0 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.322665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.323203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.323212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x85e5f0 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.323219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e5f0 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.323226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:40.415 [2024-06-09 23:08:08.323233] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:40.415 [2024-06-09 23:08:08.323241] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:40.415 [2024-06-09 23:08:08.323284] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.415 [2024-06-09 23:08:08.323298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:25:40.415 [2024-06-09 23:08:08.323308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876cd0 (9): Bad file descriptor 00:25:40.415 [2024-06-09 23:08:08.323317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e5f0 (9): Bad file descriptor 00:25:40.415 [2024-06-09 23:08:08.323421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:40.415 [2024-06-09 23:08:08.323431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:40.415 [2024-06-09 23:08:08.323438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:40.415 [2024-06-09 23:08:08.323447] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:40.415 [2024-06-09 23:08:08.323453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:40.415 [2024-06-09 23:08:08.323460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:40.415 [2024-06-09 23:08:08.323469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:40.415 [2024-06-09 23:08:08.323475] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:40.415 [2024-06-09 23:08:08.323487] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:40.415 [2024-06-09 23:08:08.323512] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323523] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323532] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323549] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323557] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.415 [2024-06-09 23:08:08.323608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.415 [2024-06-09 23:08:08.323615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.415 [2024-06-09 23:08:08.323621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.415 [2024-06-09 23:08:08.324158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.324653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.324689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85d940 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.324700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85d940 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.325204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.325798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.325836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x964020 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.325848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x964020 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.326399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.327065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.327104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a0520 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.327114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0520 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.327727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.328259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.328271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c1060 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.328280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c1060 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.328893] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.329375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.329388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x798160 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.329398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x798160 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.329901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.330624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-06-09 23:08:08.330661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79d7c0 with addr=10.0.0.2, port=4420 00:25:40.415 [2024-06-09 23:08:08.330677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79d7c0 is same with the state(5) to be set 00:25:40.415 [2024-06-09 23:08:08.330724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85d940 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x964020 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a0520 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c1060 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x798160 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79d7c0 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.330873] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.330890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:40.416 [2024-06-09 23:08:08.330900] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.330913] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:40.416 [2024-06-09 23:08:08.330923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330929] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.330935] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:25:40.416 [2024-06-09 23:08:08.330944] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330951] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.330957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:40.416 [2024-06-09 23:08:08.330966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330972] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.330978] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:40.416 [2024-06-09 23:08:08.330988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.330994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.331000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.416 [2024-06-09 23:08:08.331048] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:40.416 [2024-06-09 23:08:08.331059] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:40.416 [2024-06-09 23:08:08.331068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:40.416 [2024-06-09 23:08:08.331077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:40.416 [2024-06-09 23:08:08.331087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.331098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.331139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.331148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.331156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.331164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.416 [2024-06-09 23:08:08.331805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.332373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.332386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c8d70 with addr=10.0.0.2, port=4420 00:25:40.416 [2024-06-09 23:08:08.332396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c8d70 is same with the state(5) to be set 00:25:40.416 [2024-06-09 23:08:08.332889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.333627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.333665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x85e5f0 with addr=10.0.0.2, port=4420 00:25:40.416 [2024-06-09 23:08:08.333675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85e5f0 is same with the state(5) to be set 00:25:40.416 [2024-06-09 23:08:08.334201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.334871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.334909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x876cd0 with addr=10.0.0.2, port=4420 00:25:40.416 [2024-06-09 23:08:08.334920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x876cd0 is same with the state(5) to be set 00:25:40.416 [2024-06-09 23:08:08.335180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.335410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-06-09 23:08:08.335420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963590 with addr=10.0.0.2, port=4420 00:25:40.416 [2024-06-09 23:08:08.335427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963590 is same with the state(5) to be set 00:25:40.416 [2024-06-09 23:08:08.335491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c8d70 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.335503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85e5f0 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.335511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876cd0 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.335520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963590 (9): Bad file descriptor 00:25:40.416 [2024-06-09 23:08:08.335598] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.335607] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.335615] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:25:40.416 [2024-06-09 23:08:08.335627] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.335634] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.335640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:40.416 [2024-06-09 23:08:08.335654] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.335660] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.335667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:40.416 [2024-06-09 23:08:08.335676] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:40.416 [2024-06-09 23:08:08.335683] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:40.416 [2024-06-09 23:08:08.335689] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:40.416 [2024-06-09 23:08:08.335709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.335719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.335728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:40.416 [2024-06-09 23:08:08.335736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.416 23:08:08 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:40.417 23:08:08 -- target/shutdown.sh@138 -- # sleep 1 00:25:41.362 23:08:09 -- target/shutdown.sh@141 -- # kill -9 14382 00:25:41.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (14382) - No such process 00:25:41.362 23:08:09 -- target/shutdown.sh@141 -- # true 00:25:41.362 23:08:09 -- target/shutdown.sh@143 -- # stoptarget 00:25:41.362 23:08:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:41.362 23:08:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:41.362 23:08:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:41.362 23:08:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:41.362 23:08:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:41.362 23:08:09 -- nvmf/common.sh@116 -- # sync 00:25:41.362 23:08:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:41.362 23:08:09 -- nvmf/common.sh@119 -- # set +e 00:25:41.362 23:08:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:41.362 23:08:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:41.362 rmmod nvme_tcp 00:25:41.623 rmmod nvme_fabrics 00:25:41.623 rmmod nvme_keyring 00:25:41.623 23:08:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:41.623 23:08:09 -- nvmf/common.sh@123 -- # set -e 00:25:41.623 23:08:09 -- nvmf/common.sh@124 -- # return 0 00:25:41.623 23:08:09 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:41.623 23:08:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:41.623 23:08:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:41.623 23:08:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:41.623 23:08:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.623 23:08:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:41.623 23:08:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.623 23:08:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.623 23:08:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:43.580 23:08:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:43.580 00:25:43.580 real 0m7.463s 00:25:43.580 user 0m17.346s 00:25:43.580 sys 0m1.233s 00:25:43.580 23:08:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.580 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:43.580 ************************************ 00:25:43.580 END TEST nvmf_shutdown_tc3 00:25:43.580 ************************************ 00:25:43.580 23:08:11 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:43.580 00:25:43.580 real 0m31.704s 00:25:43.580 user 1m13.197s 00:25:43.580 sys 0m9.103s 00:25:43.580 23:08:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.580 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:43.580 ************************************ 00:25:43.580 END TEST nvmf_shutdown 00:25:43.580 ************************************ 00:25:43.580 23:08:11 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:43.580 23:08:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:43.580 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:43.843 23:08:11 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:43.843 23:08:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:43.843 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:43.843 23:08:11 -- 
nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:43.843 23:08:11 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:43.843 23:08:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:43.843 23:08:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.843 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:43.843 ************************************ 00:25:43.843 START TEST nvmf_multicontroller 00:25:43.843 ************************************ 00:25:43.843 23:08:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:43.843 * Looking for test storage... 00:25:43.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:43.843 23:08:11 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:43.843 23:08:11 -- nvmf/common.sh@7 -- # uname -s 00:25:43.843 23:08:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:43.843 23:08:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:43.843 23:08:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:43.843 23:08:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:43.843 23:08:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:43.843 23:08:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:43.843 23:08:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:43.843 23:08:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:43.843 23:08:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:43.843 23:08:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:43.843 23:08:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:43.843 23:08:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:43.843 23:08:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:43.843 23:08:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:43.843 23:08:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:43.843 23:08:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:43.843 23:08:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:43.843 23:08:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:43.843 23:08:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:43.843 23:08:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.843 23:08:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.843 23:08:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.843 23:08:11 -- paths/export.sh@5 -- # export PATH 00:25:43.843 23:08:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:43.843 23:08:11 -- nvmf/common.sh@46 -- # : 0 00:25:43.843 23:08:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:43.843 23:08:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:43.843 23:08:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:43.843 23:08:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:43.843 23:08:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:43.843 23:08:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:43.843 23:08:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:43.843 23:08:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:43.843 23:08:11 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:43.843 23:08:11 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:43.843 23:08:11 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:43.843 23:08:11 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:43.843 23:08:11 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:43.843 23:08:11 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:43.843 23:08:11 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:43.843 23:08:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:43.843 23:08:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:43.843 23:08:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:43.843 23:08:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:43.843 23:08:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:43.843 23:08:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.843 23:08:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.843 23:08:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:43.843 23:08:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:43.843 23:08:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:43.843 23:08:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:43.843 23:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:52.002 23:08:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:52.002 23:08:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:52.002 23:08:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:52.002 23:08:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:52.002 23:08:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:52.002 23:08:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:52.002 23:08:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:52.002 23:08:18 -- nvmf/common.sh@294 -- # net_devs=() 00:25:52.002 23:08:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:52.002 23:08:18 -- nvmf/common.sh@295 -- # e810=() 00:25:52.002 23:08:18 -- nvmf/common.sh@295 -- # local -ga e810 00:25:52.003 23:08:18 -- nvmf/common.sh@296 -- # x722=() 00:25:52.003 23:08:18 -- nvmf/common.sh@296 -- # local -ga x722 00:25:52.003 23:08:18 -- nvmf/common.sh@297 -- # mlx=() 00:25:52.003 23:08:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:52.003 23:08:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.003 23:08:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:52.003 23:08:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:52.003 23:08:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:52.003 23:08:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:52.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:52.003 23:08:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:52.003 23:08:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:52.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:52.003 23:08:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:52.003 23:08:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:52.003 23:08:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.003 23:08:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.003 23:08:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:52.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:52.003 23:08:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.003 23:08:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:52.003 23:08:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.003 23:08:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.003 23:08:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:52.003 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:52.003 23:08:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.003 23:08:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:52.003 23:08:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:52.003 23:08:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:52.003 23:08:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.003 23:08:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.003 23:08:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.003 23:08:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:52.003 23:08:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.003 23:08:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.003 23:08:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:52.003 23:08:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.003 23:08:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.003 23:08:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:52.003 23:08:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:52.003 23:08:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.003 23:08:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.003 23:08:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.003 23:08:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.003 23:08:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:52.003 23:08:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.003 23:08:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.003 23:08:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:52.003 23:08:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:52.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:25:52.003 00:25:52.003 --- 10.0.0.2 ping statistics --- 00:25:52.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.003 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:25:52.003 23:08:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:52.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:25:52.003 00:25:52.003 --- 10.0.0.1 ping statistics --- 00:25:52.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.003 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:25:52.003 23:08:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.003 23:08:19 -- nvmf/common.sh@410 -- # return 0 00:25:52.003 23:08:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:52.003 23:08:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.003 23:08:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:52.003 23:08:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:52.003 23:08:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.003 23:08:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:52.003 23:08:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:52.003 23:08:19 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:52.003 23:08:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:52.003 23:08:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.003 23:08:19 -- nvmf/common.sh@469 -- # nvmfpid=19419 00:25:52.003 23:08:19 -- nvmf/common.sh@470 -- # waitforlisten 19419 00:25:52.003 23:08:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:52.003 23:08:19 -- common/autotest_common.sh@819 -- # '[' -z 19419 ']' 00:25:52.003 23:08:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.003 23:08:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.003 23:08:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.003 23:08:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.003 [2024-06-09 23:08:19.140655] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:52.003 [2024-06-09 23:08:19.140718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.003 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.003 [2024-06-09 23:08:19.209332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.003 [2024-06-09 23:08:19.271412] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:52.003 [2024-06-09 23:08:19.271529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
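The nvmf_tcp_init sequence traced above gives the test a real-hardware loop: one E810 port stays in the default namespace as the initiator interface while the other is moved into a private network namespace as the target interface, so traffic between 10.0.0.1 and 10.0.0.2 goes out one physical port and in the other. A minimal manual equivalent, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used by this run (all commands as root):

  # move the target-side port into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side stays in the default namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # target side is addressed and brought up inside the namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1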
00:25:52.003 [2024-06-09 23:08:19.271538] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.003 [2024-06-09 23:08:19.271544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.003 [2024-06-09 23:08:19.271653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.003 [2024-06-09 23:08:19.271789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.003 [2024-06-09 23:08:19.271790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.003 23:08:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.003 23:08:19 -- common/autotest_common.sh@852 -- # return 0 00:25:52.003 23:08:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:52.003 23:08:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.003 23:08:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.003 23:08:19 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.003 23:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.003 [2024-06-09 23:08:19.958898] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.003 23:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.003 23:08:19 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:52.003 23:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.003 Malloc0 00:25:52.003 23:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.003 23:08:19 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.003 23:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.003 23:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 [2024-06-09 23:08:20.025829] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 [2024-06-09 23:08:20.037773] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
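Everything after nvmfappstart is plain JSON-RPC against the target's /var/tmp/spdk.sock; rpc_cmd is the harness's thin wrapper for issuing those calls. Reproduced by hand with scripts/rpc.py (the relative path to an SPDK checkout is an assumption here; the arguments are exactly the ones traced above), the first subsystem comes up as:

  RPC="./scripts/rpc.py"    # assumed location inside the SPDK source tree

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second subsystem (cnode2 backed by Malloc1) is built the same way in the trace that follows. Note that only the nvmf_tgt application runs inside cvl_0_0_ns_spdk; the RPC UNIX socket lives in the shared filesystem, so the RPC client can be invoked from the default namespace.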
00:25:52.004 23:08:20 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 Malloc1 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:52.004 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.004 23:08:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.004 23:08:20 -- host/multicontroller.sh@44 -- # bdevperf_pid=19494 00:25:52.004 23:08:20 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.004 23:08:20 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:52.004 23:08:20 -- host/multicontroller.sh@47 -- # waitforlisten 19494 /var/tmp/bdevperf.sock 00:25:52.004 23:08:20 -- common/autotest_common.sh@819 -- # '[' -z 19494 ']' 00:25:52.004 23:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:52.004 23:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.004 23:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:52.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
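bdevperf is launched idle (-z) on its own RPC socket so the test can attach controllers explicitly before any I/O runs. The NOT rpc_cmd checks that follow assert that reusing the controller name NVMe0 with a different host NQN, a different subsystem NQN, or a conflicting multipath mode is rejected with error -114, while attaching the same name to the subsystem's second listener (port 4421) is allowed. A condensed sketch of that flow, assuming an SPDK build tree and the addresses configured above:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # the harness waits for /var/tmp/bdevperf.sock to appear (waitforlisten) before issuing RPCs

  # first path: expected to succeed
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

  # same name against a different subsystem: expected to fail with -114
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
      || echo "rejected as expected"

  # same name on the subsystem's second listener: accepted
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # kick off the queued write workload once the controllers are attached
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests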
00:25:52.004 23:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.004 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.947 23:08:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.947 23:08:20 -- common/autotest_common.sh@852 -- # return 0 00:25:52.947 23:08:20 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:52.947 23:08:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.947 23:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:52.947 NVMe0n1 00:25:52.947 23:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.947 23:08:21 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:52.947 23:08:21 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:52.947 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.947 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:52.947 23:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.947 1 00:25:52.947 23:08:21 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:52.947 23:08:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:52.947 23:08:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:52.947 23:08:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:52.947 23:08:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:52.947 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.947 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:52.947 request: 00:25:52.947 { 00:25:52.947 "name": "NVMe0", 00:25:52.947 "trtype": "tcp", 00:25:52.947 "traddr": "10.0.0.2", 00:25:52.947 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:52.947 "hostaddr": "10.0.0.2", 00:25:52.947 "hostsvcid": "60000", 00:25:52.947 "adrfam": "ipv4", 00:25:52.947 "trsvcid": "4420", 00:25:52.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.947 "method": "bdev_nvme_attach_controller", 00:25:52.947 "req_id": 1 00:25:52.947 } 00:25:52.947 Got JSON-RPC error response 00:25:52.947 response: 00:25:52.947 { 00:25:52.947 "code": -114, 00:25:52.947 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:52.947 } 00:25:52.947 23:08:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:52.947 23:08:21 -- common/autotest_common.sh@643 -- # es=1 00:25:52.947 23:08:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:52.947 23:08:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:52.947 23:08:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:52.947 23:08:21 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:52.947 23:08:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:52.947 23:08:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:52.947 23:08:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:52.947 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:52.947 23:08:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:52.947 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.947 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.209 request: 00:25:53.209 { 00:25:53.209 "name": "NVMe0", 00:25:53.209 "trtype": "tcp", 00:25:53.209 "traddr": "10.0.0.2", 00:25:53.209 "hostaddr": "10.0.0.2", 00:25:53.209 "hostsvcid": "60000", 00:25:53.209 "adrfam": "ipv4", 00:25:53.209 "trsvcid": "4420", 00:25:53.209 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:53.209 "method": "bdev_nvme_attach_controller", 00:25:53.209 "req_id": 1 00:25:53.209 } 00:25:53.209 Got JSON-RPC error response 00:25:53.209 response: 00:25:53.209 { 00:25:53.209 "code": -114, 00:25:53.209 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:53.209 } 00:25:53.209 23:08:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@643 -- # es=1 00:25:53.209 23:08:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:53.209 23:08:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:53.209 23:08:21 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:53.209 23:08:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:53.209 23:08:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:53.209 23:08:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.209 23:08:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:53.209 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.209 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.209 request: 00:25:53.209 { 00:25:53.209 "name": "NVMe0", 00:25:53.209 "trtype": "tcp", 00:25:53.209 "traddr": "10.0.0.2", 00:25:53.209 "hostaddr": 
"10.0.0.2", 00:25:53.209 "hostsvcid": "60000", 00:25:53.209 "adrfam": "ipv4", 00:25:53.209 "trsvcid": "4420", 00:25:53.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.209 "multipath": "disable", 00:25:53.209 "method": "bdev_nvme_attach_controller", 00:25:53.209 "req_id": 1 00:25:53.209 } 00:25:53.209 Got JSON-RPC error response 00:25:53.209 response: 00:25:53.209 { 00:25:53.209 "code": -114, 00:25:53.209 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:53.209 } 00:25:53.209 23:08:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@643 -- # es=1 00:25:53.209 23:08:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:53.209 23:08:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:53.209 23:08:21 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:53.209 23:08:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:53.209 23:08:21 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:53.209 23:08:21 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:53.209 23:08:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.209 23:08:21 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:53.209 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.209 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.209 request: 00:25:53.209 { 00:25:53.209 "name": "NVMe0", 00:25:53.209 "trtype": "tcp", 00:25:53.209 "traddr": "10.0.0.2", 00:25:53.209 "hostaddr": "10.0.0.2", 00:25:53.209 "hostsvcid": "60000", 00:25:53.209 "adrfam": "ipv4", 00:25:53.209 "trsvcid": "4420", 00:25:53.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.209 "multipath": "failover", 00:25:53.209 "method": "bdev_nvme_attach_controller", 00:25:53.209 "req_id": 1 00:25:53.209 } 00:25:53.209 Got JSON-RPC error response 00:25:53.209 response: 00:25:53.209 { 00:25:53.209 "code": -114, 00:25:53.209 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:53.209 } 00:25:53.209 23:08:21 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@643 -- # es=1 00:25:53.209 23:08:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:53.209 23:08:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:53.209 23:08:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:53.209 23:08:21 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.209 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.210 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.210 00:25:53.210 23:08:21 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:53.210 23:08:21 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:53.210 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.210 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.471 23:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.471 23:08:21 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:53.471 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.471 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.471 00:25:53.471 23:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.471 23:08:21 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:53.471 23:08:21 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:53.471 23:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:53.471 23:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:53.471 23:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:53.471 23:08:21 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:53.471 23:08:21 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:54.861 0 00:25:54.861 23:08:22 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:54.861 23:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.861 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:54.861 23:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.861 23:08:22 -- host/multicontroller.sh@100 -- # killprocess 19494 00:25:54.861 23:08:22 -- common/autotest_common.sh@926 -- # '[' -z 19494 ']' 00:25:54.861 23:08:22 -- common/autotest_common.sh@930 -- # kill -0 19494 00:25:54.861 23:08:22 -- common/autotest_common.sh@931 -- # uname 00:25:54.861 23:08:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:54.861 23:08:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 19494 00:25:54.861 23:08:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:54.861 23:08:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:54.861 23:08:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 19494' 00:25:54.861 killing process with pid 19494 00:25:54.861 23:08:22 -- common/autotest_common.sh@945 -- # kill 19494 00:25:54.861 23:08:22 -- common/autotest_common.sh@950 -- # wait 19494 00:25:54.861 23:08:22 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:54.861 23:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.861 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:54.861 23:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.861 23:08:22 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:54.861 23:08:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:54.861 23:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:54.861 23:08:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:54.861 23:08:22 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:54.861 23:08:22 -- 
host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.861 23:08:22 -- common/autotest_common.sh@1597 -- # read -r file 00:25:54.861 23:08:22 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:54.861 23:08:22 -- common/autotest_common.sh@1596 -- # sort -u 00:25:54.861 23:08:22 -- common/autotest_common.sh@1598 -- # cat 00:25:54.861 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:54.861 [2024-06-09 23:08:20.152015] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:54.861 [2024-06-09 23:08:20.152079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid19494 ] 00:25:54.861 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.861 [2024-06-09 23:08:20.211354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.861 [2024-06-09 23:08:20.273789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.861 [2024-06-09 23:08:21.621646] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 9fae3737-09e4-4fe6-b797-1bd6d63a44a0 already exists 00:25:54.861 [2024-06-09 23:08:21.621676] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:9fae3737-09e4-4fe6-b797-1bd6d63a44a0 alias for bdev NVMe1n1 00:25:54.861 [2024-06-09 23:08:21.621687] bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:54.861 Running I/O for 1 seconds... 00:25:54.861 00:25:54.861 Latency(us) 00:25:54.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.861 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:54.861 NVMe0n1 : 1.00 28569.92 111.60 0.00 0.00 4466.29 3372.37 20862.29 00:25:54.861 =================================================================================================================== 00:25:54.861 Total : 28569.92 111.60 0.00 0.00 4466.29 3372.37 20862.29 00:25:54.861 Received shutdown signal, test time was about 1.000000 seconds 00:25:54.861 00:25:54.861 Latency(us) 00:25:54.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.861 =================================================================================================================== 00:25:54.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.861 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:54.861 23:08:22 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:54.861 23:08:22 -- common/autotest_common.sh@1597 -- # read -r file 00:25:54.861 23:08:22 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:54.861 23:08:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:54.861 23:08:22 -- nvmf/common.sh@116 -- # sync 00:25:54.861 23:08:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:54.861 23:08:22 -- nvmf/common.sh@119 -- # set +e 00:25:54.861 23:08:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:54.861 23:08:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:54.861 rmmod nvme_tcp 00:25:54.861 rmmod nvme_fabrics 00:25:55.123 rmmod nvme_keyring 00:25:55.123 23:08:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:55.123 23:08:23 -- nvmf/common.sh@123 -- # set -e 00:25:55.123 23:08:23 
-- nvmf/common.sh@124 -- # return 0 00:25:55.123 23:08:23 -- nvmf/common.sh@477 -- # '[' -n 19419 ']' 00:25:55.123 23:08:23 -- nvmf/common.sh@478 -- # killprocess 19419 00:25:55.123 23:08:23 -- common/autotest_common.sh@926 -- # '[' -z 19419 ']' 00:25:55.123 23:08:23 -- common/autotest_common.sh@930 -- # kill -0 19419 00:25:55.123 23:08:23 -- common/autotest_common.sh@931 -- # uname 00:25:55.123 23:08:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:55.123 23:08:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 19419 00:25:55.123 23:08:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:55.123 23:08:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:55.123 23:08:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 19419' 00:25:55.123 killing process with pid 19419 00:25:55.123 23:08:23 -- common/autotest_common.sh@945 -- # kill 19419 00:25:55.123 23:08:23 -- common/autotest_common.sh@950 -- # wait 19419 00:25:55.123 23:08:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:55.123 23:08:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:55.123 23:08:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:55.123 23:08:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.123 23:08:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:55.123 23:08:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.123 23:08:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.123 23:08:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.671 23:08:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:57.671 00:25:57.671 real 0m13.535s 00:25:57.671 user 0m17.038s 00:25:57.671 sys 0m6.085s 00:25:57.671 23:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.671 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:57.671 ************************************ 00:25:57.671 END TEST nvmf_multicontroller 00:25:57.671 ************************************ 00:25:57.671 23:08:25 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:57.671 23:08:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:57.671 23:08:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.671 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:57.671 ************************************ 00:25:57.671 START TEST nvmf_aer 00:25:57.671 ************************************ 00:25:57.672 23:08:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:57.672 * Looking for test storage... 
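Before nvmf_aer repeats the same bring-up, the multicontroller test tears its environment down through nvmftestfini, as traced above. In plain commands (the module names, PID variable and interface flush come from that trace; treating remove_spdk_ns as deleting the cvl_0_0_ns_spdk namespace is an assumption), that cleanup is roughly:

  modprobe -v -r nvme-tcp                 # nvmfcleanup: unload host-side NVMe modules
  modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid          # killprocess: nvmfpid was recorded when nvmf_tgt started
  ip netns delete cvl_0_0_ns_spdk         # assumption: what remove_spdk_ns amounts to for this run
  ip -4 addr flush cvl_0_1                # nvmf_tcp_fini: clear the initiator-side address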
00:25:57.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:57.672 23:08:25 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:57.672 23:08:25 -- nvmf/common.sh@7 -- # uname -s 00:25:57.672 23:08:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.672 23:08:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.672 23:08:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.672 23:08:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.672 23:08:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.672 23:08:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.672 23:08:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.672 23:08:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.672 23:08:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.672 23:08:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.672 23:08:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.672 23:08:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:57.672 23:08:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.672 23:08:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.672 23:08:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:57.672 23:08:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:57.672 23:08:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.672 23:08:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.672 23:08:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.672 23:08:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.672 23:08:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.672 23:08:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.672 23:08:25 -- paths/export.sh@5 -- # export PATH 00:25:57.672 23:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.672 23:08:25 -- nvmf/common.sh@46 -- # : 0 00:25:57.672 23:08:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:57.672 23:08:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:57.672 23:08:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:57.672 23:08:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.672 23:08:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.672 23:08:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:57.672 23:08:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:57.672 23:08:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:57.672 23:08:25 -- host/aer.sh@11 -- # nvmftestinit 00:25:57.672 23:08:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:57.672 23:08:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.672 23:08:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:57.672 23:08:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:57.672 23:08:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:57.672 23:08:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.672 23:08:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:57.672 23:08:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.672 23:08:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:57.672 23:08:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:57.672 23:08:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:57.672 23:08:25 -- common/autotest_common.sh@10 -- # set +x 00:26:04.268 23:08:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:04.268 23:08:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:04.268 23:08:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:04.268 23:08:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:04.268 23:08:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:04.268 23:08:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:04.268 23:08:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:04.268 23:08:32 -- nvmf/common.sh@294 -- # net_devs=() 00:26:04.268 23:08:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:04.268 23:08:32 -- nvmf/common.sh@295 -- # e810=() 00:26:04.268 23:08:32 -- nvmf/common.sh@295 -- # local -ga e810 00:26:04.268 23:08:32 -- nvmf/common.sh@296 -- # x722=() 00:26:04.268 
23:08:32 -- nvmf/common.sh@296 -- # local -ga x722 00:26:04.268 23:08:32 -- nvmf/common.sh@297 -- # mlx=() 00:26:04.268 23:08:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:04.268 23:08:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.268 23:08:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:04.268 23:08:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:04.268 23:08:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:04.268 23:08:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:04.268 23:08:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:04.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:04.268 23:08:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:04.268 23:08:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:04.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:04.268 23:08:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:04.268 23:08:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:04.268 23:08:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:04.269 23:08:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.269 23:08:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:04.269 23:08:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.269 23:08:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:04.269 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:04.269 23:08:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.269 23:08:32 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:04.269 23:08:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.269 23:08:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:04.269 23:08:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.269 23:08:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:04.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:04.269 23:08:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.269 23:08:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:04.269 23:08:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:04.269 23:08:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:04.269 23:08:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:04.269 23:08:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:04.269 23:08:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.269 23:08:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.269 23:08:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.269 23:08:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:04.269 23:08:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.269 23:08:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.269 23:08:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:04.269 23:08:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.269 23:08:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.269 23:08:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:04.269 23:08:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:04.269 23:08:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.269 23:08:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.269 23:08:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.269 23:08:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.269 23:08:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:04.269 23:08:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.269 23:08:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.269 23:08:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.269 23:08:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:04.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:04.269 00:26:04.269 --- 10.0.0.2 ping statistics --- 00:26:04.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.269 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:04.269 23:08:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:04.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:26:04.269 00:26:04.269 --- 10.0.0.1 ping statistics --- 00:26:04.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.269 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:26:04.269 23:08:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.269 23:08:32 -- nvmf/common.sh@410 -- # return 0 00:26:04.269 23:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:04.269 23:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.269 23:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:04.269 23:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:04.269 23:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.269 23:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:04.269 23:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:04.269 23:08:32 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:04.269 23:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:04.269 23:08:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:04.269 23:08:32 -- common/autotest_common.sh@10 -- # set +x 00:26:04.269 23:08:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:04.269 23:08:32 -- nvmf/common.sh@469 -- # nvmfpid=24194 00:26:04.269 23:08:32 -- nvmf/common.sh@470 -- # waitforlisten 24194 00:26:04.269 23:08:32 -- common/autotest_common.sh@819 -- # '[' -z 24194 ']' 00:26:04.269 23:08:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.269 23:08:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.269 23:08:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.269 23:08:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.269 23:08:32 -- common/autotest_common.sh@10 -- # set +x 00:26:04.530 [2024-06-09 23:08:32.473645] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:04.530 [2024-06-09 23:08:32.473707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.530 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.530 [2024-06-09 23:08:32.539990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.530 [2024-06-09 23:08:32.605277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:04.530 [2024-06-09 23:08:32.605390] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.530 [2024-06-09 23:08:32.605399] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.530 [2024-06-09 23:08:32.605411] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
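With the target restarted for the AER case, the aer.sh flow traced below checks that hot-adding a namespace to a live subsystem raises a Namespace Attribute Changed asynchronous event at the host. Boiled down to its commands (paths are relative to an SPDK checkout, and the touch-file polling is a compact stand-in for the harness's waitforfile helper):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host-side AER listener; the harness waits for the touch file before changing anything
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

  # adding a second namespace should produce "aer_cb - Changed Namespace" in the listener's output
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2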
00:26:04.530 [2024-06-09 23:08:32.605515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.530 [2024-06-09 23:08:32.605664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:04.530 [2024-06-09 23:08:32.605784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.530 [2024-06-09 23:08:32.605785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.102 23:08:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:05.102 23:08:33 -- common/autotest_common.sh@852 -- # return 0 00:26:05.102 23:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:05.102 23:08:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:05.103 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 23:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.364 23:08:33 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 [2024-06-09 23:08:33.296644] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.364 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.364 23:08:33 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 Malloc0 00:26:05.364 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.364 23:08:33 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.364 23:08:33 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.364 23:08:33 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 [2024-06-09 23:08:33.355955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.364 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.364 23:08:33 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:05.364 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.364 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.364 [2024-06-09 23:08:33.367770] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:05.364 [ 00:26:05.364 { 00:26:05.364 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:05.364 "subtype": "Discovery", 00:26:05.365 "listen_addresses": [], 00:26:05.365 "allow_any_host": true, 00:26:05.365 "hosts": [] 00:26:05.365 }, 00:26:05.365 { 00:26:05.365 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:05.365 "subtype": "NVMe", 00:26:05.365 "listen_addresses": [ 00:26:05.365 { 00:26:05.365 "transport": "TCP", 00:26:05.365 "trtype": "TCP", 00:26:05.365 "adrfam": "IPv4", 00:26:05.365 "traddr": "10.0.0.2", 00:26:05.365 "trsvcid": "4420" 00:26:05.365 } 00:26:05.365 ], 00:26:05.365 "allow_any_host": true, 00:26:05.365 "hosts": [], 00:26:05.365 "serial_number": "SPDK00000000000001", 00:26:05.365 "model_number": "SPDK bdev Controller", 00:26:05.365 "max_namespaces": 2, 00:26:05.365 "min_cntlid": 1, 00:26:05.365 "max_cntlid": 65519, 00:26:05.365 "namespaces": [ 00:26:05.365 { 00:26:05.365 "nsid": 1, 00:26:05.365 "bdev_name": "Malloc0", 00:26:05.365 "name": "Malloc0", 00:26:05.365 "nguid": "7F537A9AD6814C60A463272EAAE1BF54", 00:26:05.365 "uuid": "7f537a9a-d681-4c60-a463-272eaae1bf54" 00:26:05.365 } 00:26:05.365 ] 00:26:05.365 } 00:26:05.365 ] 00:26:05.365 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.365 23:08:33 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:05.365 23:08:33 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:05.365 23:08:33 -- host/aer.sh@33 -- # aerpid=24548 00:26:05.365 23:08:33 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:05.365 23:08:33 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:05.365 23:08:33 -- common/autotest_common.sh@1244 -- # local i=0 00:26:05.365 23:08:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:05.365 23:08:33 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:05.365 23:08:33 -- common/autotest_common.sh@1247 -- # i=1 00:26:05.365 23:08:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:05.365 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.365 23:08:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:05.365 23:08:33 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:05.365 23:08:33 -- common/autotest_common.sh@1247 -- # i=2 00:26:05.365 23:08:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:05.627 23:08:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:05.627 23:08:33 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:26:05.627 23:08:33 -- common/autotest_common.sh@1247 -- # i=3 00:26:05.627 23:08:33 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:05.627 23:08:33 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:05.627 23:08:33 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:05.627 23:08:33 -- common/autotest_common.sh@1255 -- # return 0 00:26:05.627 23:08:33 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:05.627 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.627 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.627 Malloc1 00:26:05.627 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.627 23:08:33 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:05.627 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.627 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.627 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.627 23:08:33 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:05.627 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.627 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.627 [ 00:26:05.627 { 00:26:05.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:05.627 "subtype": "Discovery", 00:26:05.627 "listen_addresses": [], 00:26:05.627 "allow_any_host": true, 00:26:05.627 "hosts": [] 00:26:05.627 }, 00:26:05.627 { 00:26:05.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.627 "subtype": "NVMe", 00:26:05.627 "listen_addresses": [ 00:26:05.627 { 00:26:05.627 "transport": "TCP", 00:26:05.627 "trtype": "TCP", 00:26:05.627 "adrfam": "IPv4", 00:26:05.627 "traddr": "10.0.0.2", 00:26:05.627 "trsvcid": "4420" 00:26:05.627 } 00:26:05.627 ], 00:26:05.628 "allow_any_host": true, 00:26:05.628 "hosts": [], 00:26:05.628 "serial_number": "SPDK00000000000001", 00:26:05.628 "model_number": "SPDK bdev Controller", 00:26:05.628 "max_namespaces": 2, 00:26:05.628 "min_cntlid": 1, 00:26:05.628 "max_cntlid": 65519, 00:26:05.628 "namespaces": [ 00:26:05.628 { 00:26:05.628 "nsid": 1, 00:26:05.628 "bdev_name": "Malloc0", 00:26:05.628 "name": "Malloc0", 00:26:05.628 "nguid": "7F537A9AD6814C60A463272EAAE1BF54", 00:26:05.628 "uuid": "7f537a9a-d681-4c60-a463-272eaae1bf54" 00:26:05.628 }, 00:26:05.628 Asynchronous Event Request test 00:26:05.628 Attaching to 10.0.0.2 00:26:05.628 Attached to 10.0.0.2 00:26:05.628 Registering asynchronous event callbacks... 00:26:05.628 Starting namespace attribute notice tests for all controllers... 00:26:05.628 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:05.628 aer_cb - Changed Namespace 00:26:05.628 Cleaning up... 
00:26:05.628 { 00:26:05.628 "nsid": 2, 00:26:05.628 "bdev_name": "Malloc1", 00:26:05.628 "name": "Malloc1", 00:26:05.628 "nguid": "407D442D74D54A5E90D3BEC2CE56A904", 00:26:05.628 "uuid": "407d442d-74d5-4a5e-90d3-bec2ce56a904" 00:26:05.628 } 00:26:05.628 ] 00:26:05.628 } 00:26:05.628 ] 00:26:05.628 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.628 23:08:33 -- host/aer.sh@43 -- # wait 24548 00:26:05.628 23:08:33 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:05.628 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.628 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.628 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.628 23:08:33 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:05.628 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.628 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.628 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.628 23:08:33 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.628 23:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.628 23:08:33 -- common/autotest_common.sh@10 -- # set +x 00:26:05.890 23:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.890 23:08:33 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:05.890 23:08:33 -- host/aer.sh@51 -- # nvmftestfini 00:26:05.890 23:08:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:05.890 23:08:33 -- nvmf/common.sh@116 -- # sync 00:26:05.890 23:08:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:05.890 23:08:33 -- nvmf/common.sh@119 -- # set +e 00:26:05.890 23:08:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:05.890 23:08:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:05.890 rmmod nvme_tcp 00:26:05.890 rmmod nvme_fabrics 00:26:05.890 rmmod nvme_keyring 00:26:05.890 23:08:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:05.890 23:08:33 -- nvmf/common.sh@123 -- # set -e 00:26:05.890 23:08:33 -- nvmf/common.sh@124 -- # return 0 00:26:05.890 23:08:33 -- nvmf/common.sh@477 -- # '[' -n 24194 ']' 00:26:05.890 23:08:33 -- nvmf/common.sh@478 -- # killprocess 24194 00:26:05.890 23:08:33 -- common/autotest_common.sh@926 -- # '[' -z 24194 ']' 00:26:05.890 23:08:33 -- common/autotest_common.sh@930 -- # kill -0 24194 00:26:05.890 23:08:33 -- common/autotest_common.sh@931 -- # uname 00:26:05.890 23:08:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:05.890 23:08:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 24194 00:26:05.890 23:08:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:05.890 23:08:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:05.890 23:08:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 24194' 00:26:05.890 killing process with pid 24194 00:26:05.890 23:08:33 -- common/autotest_common.sh@945 -- # kill 24194 00:26:05.890 [2024-06-09 23:08:33.936085] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:05.890 23:08:33 -- common/autotest_common.sh@950 -- # wait 24194 00:26:05.890 23:08:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:05.890 23:08:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:05.890 23:08:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:06.152 23:08:34 -- nvmf/common.sh@273 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.152 23:08:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:06.152 23:08:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.152 23:08:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:06.152 23:08:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.070 23:08:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:08.070 00:26:08.070 real 0m10.744s 00:26:08.070 user 0m7.801s 00:26:08.070 sys 0m5.550s 00:26:08.070 23:08:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.070 23:08:36 -- common/autotest_common.sh@10 -- # set +x 00:26:08.070 ************************************ 00:26:08.070 END TEST nvmf_aer 00:26:08.070 ************************************ 00:26:08.070 23:08:36 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:08.070 23:08:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:08.070 23:08:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:08.070 23:08:36 -- common/autotest_common.sh@10 -- # set +x 00:26:08.070 ************************************ 00:26:08.070 START TEST nvmf_async_init 00:26:08.070 ************************************ 00:26:08.070 23:08:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:08.332 * Looking for test storage... 00:26:08.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:08.332 23:08:36 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.332 23:08:36 -- nvmf/common.sh@7 -- # uname -s 00:26:08.332 23:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.332 23:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.332 23:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.332 23:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.332 23:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.332 23:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.332 23:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.332 23:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.332 23:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.332 23:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.332 23:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.332 23:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.332 23:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.332 23:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.332 23:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.332 23:08:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.332 23:08:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.332 23:08:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.332 23:08:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.332 23:08:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.332 23:08:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.332 23:08:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.332 23:08:36 -- paths/export.sh@5 -- # export PATH 00:26:08.332 23:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.332 23:08:36 -- nvmf/common.sh@46 -- # : 0 00:26:08.332 23:08:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:08.332 23:08:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:08.332 23:08:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:08.332 23:08:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.332 23:08:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.332 23:08:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:08.332 23:08:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:08.332 23:08:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:08.332 23:08:36 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:08.332 23:08:36 -- host/async_init.sh@14 -- # null_block_size=512 00:26:08.332 23:08:36 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:08.332 23:08:36 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:08.332 23:08:36 -- host/async_init.sh@20 -- # uuidgen 00:26:08.332 23:08:36 -- host/async_init.sh@20 -- # tr -d - 00:26:08.332 23:08:36 -- host/async_init.sh@20 -- # nguid=f09c2680c5e34ce1a186d870d4b78bd9 00:26:08.332 23:08:36 -- host/async_init.sh@22 -- # nvmftestinit 00:26:08.332 23:08:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
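Condensed, the host/async_init.sh setup traced above amounts to the following short sketch (not raw trace output; the values are the ones recorded in this run, and the comment about block count is read off the bdev_get_bdevs output further down):
null_bdev_size=1024     # MiB; shows up below as num_blocks 2097152 x 512-byte blocks
null_block_size=512
null_bdev=null0
nvme_bdev=nvme0
nguid=$(uuidgen | tr -d -)   # dashes stripped; this run: f09c2680c5e34ce1a186d870d4b78bd9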
00:26:08.332 23:08:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.332 23:08:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:08.332 23:08:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:08.332 23:08:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:08.332 23:08:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.332 23:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.332 23:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.332 23:08:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:08.332 23:08:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:08.332 23:08:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:08.332 23:08:36 -- common/autotest_common.sh@10 -- # set +x 00:26:16.479 23:08:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:16.479 23:08:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:16.479 23:08:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:16.479 23:08:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:16.479 23:08:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:16.479 23:08:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:16.479 23:08:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:16.479 23:08:43 -- nvmf/common.sh@294 -- # net_devs=() 00:26:16.479 23:08:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:16.479 23:08:43 -- nvmf/common.sh@295 -- # e810=() 00:26:16.479 23:08:43 -- nvmf/common.sh@295 -- # local -ga e810 00:26:16.479 23:08:43 -- nvmf/common.sh@296 -- # x722=() 00:26:16.479 23:08:43 -- nvmf/common.sh@296 -- # local -ga x722 00:26:16.479 23:08:43 -- nvmf/common.sh@297 -- # mlx=() 00:26:16.479 23:08:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:16.479 23:08:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.479 23:08:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:16.479 23:08:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:16.479 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:16.479 23:08:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:16.479 23:08:43 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:16.479 23:08:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:16.479 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:16.479 23:08:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:16.479 23:08:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.479 23:08:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.479 23:08:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:16.479 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:16.479 23:08:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:16.479 23:08:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.479 23:08:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.479 23:08:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:16.479 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:16.479 23:08:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:16.479 23:08:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:16.479 23:08:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.479 23:08:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.479 23:08:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:16.479 23:08:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.479 23:08:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.479 23:08:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:16.479 23:08:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.479 23:08:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.479 23:08:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:16.479 23:08:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:16.479 23:08:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.479 23:08:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:16.479 23:08:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.479 23:08:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.479 23:08:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:16.479 23:08:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.479 23:08:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.479 23:08:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.479 23:08:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:16.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:26:16.479 00:26:16.479 --- 10.0.0.2 ping statistics --- 00:26:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.479 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:26:16.479 23:08:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:26:16.479 00:26:16.479 --- 10.0.0.1 ping statistics --- 00:26:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.479 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:26:16.479 23:08:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.479 23:08:43 -- nvmf/common.sh@410 -- # return 0 00:26:16.479 23:08:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:16.479 23:08:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:16.479 23:08:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:16.479 23:08:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:16.479 23:08:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:16.479 23:08:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:16.479 23:08:43 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:16.479 23:08:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:16.479 23:08:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:16.479 23:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:16.479 23:08:43 -- nvmf/common.sh@469 -- # nvmfpid=28733 00:26:16.479 23:08:43 -- nvmf/common.sh@470 -- # waitforlisten 28733 00:26:16.479 23:08:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:16.479 23:08:43 -- common/autotest_common.sh@819 -- # '[' -z 28733 ']' 00:26:16.479 23:08:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.479 23:08:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:16.479 23:08:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.479 23:08:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:16.479 23:08:43 -- common/autotest_common.sh@10 -- # set +x 00:26:16.479 [2024-06-09 23:08:43.656798] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
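The nvmf_tcp_init bring-up traced above condenses to the sequence below (a sketch assembled from the recorded commands, not raw trace output): the first E810 port, cvl_0_0, becomes the target side inside the cvl_0_0_ns_spdk namespace at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1.
# flush any stale addresses on both E810 ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# move the target port into its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address plan: initiator 10.0.0.1/24 in the root namespace, target 10.0.0.2/24 inside it
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP traffic to port 4420 on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity pings in both directions before the target app is started
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1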
00:26:16.479 [2024-06-09 23:08:43.656859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.479 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.479 [2024-06-09 23:08:43.723957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.479 [2024-06-09 23:08:43.790056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:16.479 [2024-06-09 23:08:43.790171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.479 [2024-06-09 23:08:43.790186] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.479 [2024-06-09 23:08:43.790193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.479 [2024-06-09 23:08:43.790212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.479 23:08:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:16.479 23:08:44 -- common/autotest_common.sh@852 -- # return 0 00:26:16.479 23:08:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:16.479 23:08:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 23:08:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.480 23:08:44 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 [2024-06-09 23:08:44.456946] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 null0 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f09c2680c5e34ce1a186d870d4b78bd9 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.480 [2024-06-09 23:08:44.497151] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.480 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.480 23:08:44 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:16.480 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.480 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.741 nvme0n1 00:26:16.741 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.741 23:08:44 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:16.741 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.741 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.741 [ 00:26:16.741 { 00:26:16.741 "name": "nvme0n1", 00:26:16.741 "aliases": [ 00:26:16.741 "f09c2680-c5e3-4ce1-a186-d870d4b78bd9" 00:26:16.741 ], 00:26:16.741 "product_name": "NVMe disk", 00:26:16.741 "block_size": 512, 00:26:16.741 "num_blocks": 2097152, 00:26:16.741 "uuid": "f09c2680-c5e3-4ce1-a186-d870d4b78bd9", 00:26:16.741 "assigned_rate_limits": { 00:26:16.741 "rw_ios_per_sec": 0, 00:26:16.741 "rw_mbytes_per_sec": 0, 00:26:16.741 "r_mbytes_per_sec": 0, 00:26:16.741 "w_mbytes_per_sec": 0 00:26:16.741 }, 00:26:16.741 "claimed": false, 00:26:16.741 "zoned": false, 00:26:16.741 "supported_io_types": { 00:26:16.741 "read": true, 00:26:16.741 "write": true, 00:26:16.741 "unmap": false, 00:26:16.741 "write_zeroes": true, 00:26:16.741 "flush": true, 00:26:16.741 "reset": true, 00:26:16.741 "compare": true, 00:26:16.741 "compare_and_write": true, 00:26:16.741 "abort": true, 00:26:16.741 "nvme_admin": true, 00:26:16.741 "nvme_io": true 00:26:16.741 }, 00:26:16.741 "driver_specific": { 00:26:16.741 "nvme": [ 00:26:16.741 { 00:26:16.741 "trid": { 00:26:16.741 "trtype": "TCP", 00:26:16.741 "adrfam": "IPv4", 00:26:16.741 "traddr": "10.0.0.2", 00:26:16.741 "trsvcid": "4420", 00:26:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:16.741 }, 00:26:16.741 "ctrlr_data": { 00:26:16.741 "cntlid": 1, 00:26:16.741 "vendor_id": "0x8086", 00:26:16.741 "model_number": "SPDK bdev Controller", 00:26:16.741 "serial_number": "00000000000000000000", 00:26:16.741 "firmware_revision": "24.01.1", 00:26:16.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.741 "oacs": { 00:26:16.742 "security": 0, 00:26:16.742 "format": 0, 00:26:16.742 "firmware": 0, 00:26:16.742 "ns_manage": 0 00:26:16.742 }, 00:26:16.742 "multi_ctrlr": true, 00:26:16.742 "ana_reporting": false 00:26:16.742 }, 00:26:16.742 "vs": { 00:26:16.742 "nvme_version": "1.3" 00:26:16.742 }, 00:26:16.742 "ns_data": { 00:26:16.742 "id": 1, 00:26:16.742 "can_share": true 00:26:16.742 } 00:26:16.742 } 00:26:16.742 ], 00:26:16.742 "mp_policy": "active_passive" 00:26:16.742 } 00:26:16.742 } 00:26:16.742 ] 00:26:16.742 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.742 23:08:44 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:16.742 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.742 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.742 [2024-06-09 23:08:44.749701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:16.742 [2024-06-09 23:08:44.749760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x968a50 (9): Bad file 
descriptor 00:26:16.742 [2024-06-09 23:08:44.881497] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:16.742 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.742 23:08:44 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:16.742 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.742 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.742 [ 00:26:16.742 { 00:26:16.742 "name": "nvme0n1", 00:26:16.742 "aliases": [ 00:26:16.742 "f09c2680-c5e3-4ce1-a186-d870d4b78bd9" 00:26:16.742 ], 00:26:16.742 "product_name": "NVMe disk", 00:26:16.742 "block_size": 512, 00:26:16.742 "num_blocks": 2097152, 00:26:16.742 "uuid": "f09c2680-c5e3-4ce1-a186-d870d4b78bd9", 00:26:16.742 "assigned_rate_limits": { 00:26:16.742 "rw_ios_per_sec": 0, 00:26:16.742 "rw_mbytes_per_sec": 0, 00:26:16.742 "r_mbytes_per_sec": 0, 00:26:16.742 "w_mbytes_per_sec": 0 00:26:16.742 }, 00:26:16.742 "claimed": false, 00:26:16.742 "zoned": false, 00:26:16.742 "supported_io_types": { 00:26:16.742 "read": true, 00:26:16.742 "write": true, 00:26:16.742 "unmap": false, 00:26:16.742 "write_zeroes": true, 00:26:16.742 "flush": true, 00:26:16.742 "reset": true, 00:26:16.742 "compare": true, 00:26:16.742 "compare_and_write": true, 00:26:16.742 "abort": true, 00:26:16.742 "nvme_admin": true, 00:26:16.742 "nvme_io": true 00:26:16.742 }, 00:26:16.742 "driver_specific": { 00:26:16.742 "nvme": [ 00:26:16.742 { 00:26:16.742 "trid": { 00:26:16.742 "trtype": "TCP", 00:26:16.742 "adrfam": "IPv4", 00:26:16.742 "traddr": "10.0.0.2", 00:26:16.742 "trsvcid": "4420", 00:26:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:16.742 }, 00:26:16.742 "ctrlr_data": { 00:26:16.742 "cntlid": 2, 00:26:16.742 "vendor_id": "0x8086", 00:26:16.742 "model_number": "SPDK bdev Controller", 00:26:16.742 "serial_number": "00000000000000000000", 00:26:16.742 "firmware_revision": "24.01.1", 00:26:16.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.742 "oacs": { 00:26:16.742 "security": 0, 00:26:16.742 "format": 0, 00:26:16.742 "firmware": 0, 00:26:16.742 "ns_manage": 0 00:26:16.742 }, 00:26:16.742 "multi_ctrlr": true, 00:26:16.742 "ana_reporting": false 00:26:16.742 }, 00:26:16.742 "vs": { 00:26:16.742 "nvme_version": "1.3" 00:26:16.742 }, 00:26:16.742 "ns_data": { 00:26:16.742 "id": 1, 00:26:16.742 "can_share": true 00:26:16.742 } 00:26:16.742 } 00:26:16.742 ], 00:26:16.742 "mp_policy": "active_passive" 00:26:16.742 } 00:26:16.742 } 00:26:16.742 ] 00:26:16.742 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.742 23:08:44 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.742 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:16.742 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:16.742 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:16.742 23:08:44 -- host/async_init.sh@53 -- # mktemp 00:26:17.009 23:08:44 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7cCdV721uL 00:26:17.009 23:08:44 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:17.009 23:08:44 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7cCdV721uL 00:26:17.009 23:08:44 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:17.009 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 23:08:44 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:44 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:17.009 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 [2024-06-09 23:08:44.938306] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:17.009 [2024-06-09 23:08:44.938428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:17.009 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:44 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7cCdV721uL 00:26:17.009 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 23:08:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:44 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7cCdV721uL 00:26:17.009 23:08:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 [2024-06-09 23:08:44.954346] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:17.009 nvme0n1 00:26:17.009 23:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:45 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:17.009 23:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 [ 00:26:17.009 { 00:26:17.009 "name": "nvme0n1", 00:26:17.009 "aliases": [ 00:26:17.009 "f09c2680-c5e3-4ce1-a186-d870d4b78bd9" 00:26:17.009 ], 00:26:17.009 "product_name": "NVMe disk", 00:26:17.009 "block_size": 512, 00:26:17.009 "num_blocks": 2097152, 00:26:17.009 "uuid": "f09c2680-c5e3-4ce1-a186-d870d4b78bd9", 00:26:17.009 "assigned_rate_limits": { 00:26:17.009 "rw_ios_per_sec": 0, 00:26:17.009 "rw_mbytes_per_sec": 0, 00:26:17.009 "r_mbytes_per_sec": 0, 00:26:17.009 "w_mbytes_per_sec": 0 00:26:17.009 }, 00:26:17.009 "claimed": false, 00:26:17.009 "zoned": false, 00:26:17.009 "supported_io_types": { 00:26:17.009 "read": true, 00:26:17.009 "write": true, 00:26:17.009 "unmap": false, 00:26:17.009 "write_zeroes": true, 00:26:17.009 "flush": true, 00:26:17.009 "reset": true, 00:26:17.009 "compare": true, 00:26:17.009 "compare_and_write": true, 00:26:17.009 "abort": true, 00:26:17.009 "nvme_admin": true, 00:26:17.009 "nvme_io": true 00:26:17.009 }, 00:26:17.009 "driver_specific": { 00:26:17.009 "nvme": [ 00:26:17.009 { 00:26:17.009 "trid": { 00:26:17.009 "trtype": "TCP", 00:26:17.009 "adrfam": "IPv4", 00:26:17.009 "traddr": "10.0.0.2", 00:26:17.009 "trsvcid": "4421", 00:26:17.009 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:17.009 }, 00:26:17.009 "ctrlr_data": { 00:26:17.009 "cntlid": 3, 00:26:17.009 "vendor_id": "0x8086", 00:26:17.009 "model_number": "SPDK bdev Controller", 00:26:17.009 "serial_number": "00000000000000000000", 00:26:17.009 "firmware_revision": "24.01.1", 00:26:17.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.009 "oacs": { 00:26:17.009 "security": 0, 00:26:17.009 "format": 0, 00:26:17.009 "firmware": 0, 00:26:17.009 
"ns_manage": 0 00:26:17.009 }, 00:26:17.009 "multi_ctrlr": true, 00:26:17.009 "ana_reporting": false 00:26:17.009 }, 00:26:17.009 "vs": { 00:26:17.009 "nvme_version": "1.3" 00:26:17.009 }, 00:26:17.009 "ns_data": { 00:26:17.009 "id": 1, 00:26:17.009 "can_share": true 00:26:17.009 } 00:26:17.009 } 00:26:17.009 ], 00:26:17.009 "mp_policy": "active_passive" 00:26:17.009 } 00:26:17.009 } 00:26:17.009 ] 00:26:17.009 23:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:45 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.009 23:08:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:17.009 23:08:45 -- common/autotest_common.sh@10 -- # set +x 00:26:17.009 23:08:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:17.009 23:08:45 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.7cCdV721uL 00:26:17.009 23:08:45 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:17.009 23:08:45 -- host/async_init.sh@78 -- # nvmftestfini 00:26:17.009 23:08:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:17.009 23:08:45 -- nvmf/common.sh@116 -- # sync 00:26:17.010 23:08:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:17.010 23:08:45 -- nvmf/common.sh@119 -- # set +e 00:26:17.010 23:08:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:17.010 23:08:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:17.010 rmmod nvme_tcp 00:26:17.010 rmmod nvme_fabrics 00:26:17.010 rmmod nvme_keyring 00:26:17.010 23:08:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:17.010 23:08:45 -- nvmf/common.sh@123 -- # set -e 00:26:17.010 23:08:45 -- nvmf/common.sh@124 -- # return 0 00:26:17.010 23:08:45 -- nvmf/common.sh@477 -- # '[' -n 28733 ']' 00:26:17.010 23:08:45 -- nvmf/common.sh@478 -- # killprocess 28733 00:26:17.010 23:08:45 -- common/autotest_common.sh@926 -- # '[' -z 28733 ']' 00:26:17.010 23:08:45 -- common/autotest_common.sh@930 -- # kill -0 28733 00:26:17.010 23:08:45 -- common/autotest_common.sh@931 -- # uname 00:26:17.010 23:08:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.010 23:08:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 28733 00:26:17.306 23:08:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.306 23:08:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.306 23:08:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 28733' 00:26:17.306 killing process with pid 28733 00:26:17.306 23:08:45 -- common/autotest_common.sh@945 -- # kill 28733 00:26:17.306 23:08:45 -- common/autotest_common.sh@950 -- # wait 28733 00:26:17.306 23:08:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:17.306 23:08:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:17.306 23:08:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:17.307 23:08:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:17.307 23:08:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:17.307 23:08:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.307 23:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.307 23:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.235 23:08:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:19.235 00:26:19.235 real 0m11.202s 00:26:19.235 user 0m4.008s 00:26:19.235 sys 0m5.625s 00:26:19.235 23:08:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.235 23:08:47 -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.235 ************************************ 00:26:19.235 END TEST nvmf_async_init 00:26:19.235 ************************************ 00:26:19.497 23:08:47 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:19.497 23:08:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:19.497 23:08:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.497 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.497 ************************************ 00:26:19.497 START TEST dma 00:26:19.497 ************************************ 00:26:19.497 23:08:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:19.497 * Looking for test storage... 00:26:19.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.497 23:08:47 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.497 23:08:47 -- nvmf/common.sh@7 -- # uname -s 00:26:19.497 23:08:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.497 23:08:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.497 23:08:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.497 23:08:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.497 23:08:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.497 23:08:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.497 23:08:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.497 23:08:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.497 23:08:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.497 23:08:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.497 23:08:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.497 23:08:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.497 23:08:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.497 23:08:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.497 23:08:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.497 23:08:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.497 23:08:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.497 23:08:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.497 23:08:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.497 23:08:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.497 23:08:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.497 23:08:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.497 23:08:47 -- paths/export.sh@5 -- # export PATH 00:26:19.497 23:08:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.497 23:08:47 -- nvmf/common.sh@46 -- # : 0 00:26:19.497 23:08:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.497 23:08:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.497 23:08:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.497 23:08:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.497 23:08:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.497 23:08:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.497 23:08:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.497 23:08:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.497 23:08:47 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:19.497 23:08:47 -- host/dma.sh@13 -- # exit 0 00:26:19.497 00:26:19.497 real 0m0.125s 00:26:19.497 user 0m0.054s 00:26:19.497 sys 0m0.080s 00:26:19.497 23:08:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.497 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.497 ************************************ 00:26:19.497 END TEST dma 00:26:19.497 ************************************ 00:26:19.497 23:08:47 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:19.497 23:08:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:19.497 23:08:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.497 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:19.497 ************************************ 00:26:19.497 START TEST nvmf_identify 00:26:19.497 ************************************ 00:26:19.497 23:08:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:19.759 * Looking for 
test storage... 00:26:19.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.759 23:08:47 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.759 23:08:47 -- nvmf/common.sh@7 -- # uname -s 00:26:19.759 23:08:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.759 23:08:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.759 23:08:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.759 23:08:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.759 23:08:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.759 23:08:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.759 23:08:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.759 23:08:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.759 23:08:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.759 23:08:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.759 23:08:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.759 23:08:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.759 23:08:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.759 23:08:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.759 23:08:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.759 23:08:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.759 23:08:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.759 23:08:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.759 23:08:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.759 23:08:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.759 23:08:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.759 23:08:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.759 23:08:47 -- paths/export.sh@5 -- # export PATH 00:26:19.759 23:08:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.759 23:08:47 -- nvmf/common.sh@46 -- # : 0 00:26:19.759 23:08:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.759 23:08:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.759 23:08:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.759 23:08:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.759 23:08:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.759 23:08:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.759 23:08:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.759 23:08:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.759 23:08:47 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:19.759 23:08:47 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:19.759 23:08:47 -- host/identify.sh@14 -- # nvmftestinit 00:26:19.759 23:08:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:19.759 23:08:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.759 23:08:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:19.759 23:08:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:19.759 23:08:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:19.759 23:08:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.759 23:08:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.759 23:08:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.759 23:08:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:19.759 23:08:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:19.759 23:08:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:19.759 23:08:47 -- common/autotest_common.sh@10 -- # set +x 00:26:26.360 23:08:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:26.360 23:08:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:26.360 23:08:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:26.360 23:08:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:26.360 23:08:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:26.360 23:08:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:26.360 23:08:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:26.360 23:08:54 -- nvmf/common.sh@294 -- # net_devs=() 00:26:26.360 23:08:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:26.360 23:08:54 -- nvmf/common.sh@295 
-- # e810=() 00:26:26.360 23:08:54 -- nvmf/common.sh@295 -- # local -ga e810 00:26:26.360 23:08:54 -- nvmf/common.sh@296 -- # x722=() 00:26:26.360 23:08:54 -- nvmf/common.sh@296 -- # local -ga x722 00:26:26.360 23:08:54 -- nvmf/common.sh@297 -- # mlx=() 00:26:26.360 23:08:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:26.360 23:08:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.360 23:08:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:26.360 23:08:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:26.360 23:08:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:26.360 23:08:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:26.360 23:08:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:26.360 23:08:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:26.360 23:08:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.361 23:08:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:26.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:26.361 23:08:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.361 23:08:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:26.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:26.361 23:08:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:26.361 23:08:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.361 23:08:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.361 23:08:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.361 23:08:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.361 23:08:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:26.361 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:26:26.361 23:08:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.361 23:08:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.361 23:08:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.361 23:08:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.361 23:08:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.361 23:08:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:26.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:26.361 23:08:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.361 23:08:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:26.361 23:08:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:26.361 23:08:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:26.361 23:08:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:26.361 23:08:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.361 23:08:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.361 23:08:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.361 23:08:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:26.361 23:08:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.361 23:08:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.361 23:08:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:26.361 23:08:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.361 23:08:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.361 23:08:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:26.361 23:08:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:26.361 23:08:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.361 23:08:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.623 23:08:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.623 23:08:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.623 23:08:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:26.623 23:08:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.623 23:08:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.623 23:08:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.623 23:08:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:26.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:26:26.623 00:26:26.623 --- 10.0.0.2 ping statistics --- 00:26:26.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.623 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:26:26.623 23:08:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.443 ms 00:26:26.623 00:26:26.623 --- 10.0.0.1 ping statistics --- 00:26:26.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.623 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:26:26.623 23:08:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.623 23:08:54 -- nvmf/common.sh@410 -- # return 0 00:26:26.623 23:08:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:26.623 23:08:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.623 23:08:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:26.623 23:08:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:26.623 23:08:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.623 23:08:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:26.623 23:08:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:26.885 23:08:54 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:26.885 23:08:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:26.885 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:26.885 23:08:54 -- host/identify.sh@19 -- # nvmfpid=33302 00:26:26.885 23:08:54 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.885 23:08:54 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:26.885 23:08:54 -- host/identify.sh@23 -- # waitforlisten 33302 00:26:26.885 23:08:54 -- common/autotest_common.sh@819 -- # '[' -z 33302 ']' 00:26:26.885 23:08:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.885 23:08:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.885 23:08:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.885 23:08:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.885 23:08:54 -- common/autotest_common.sh@10 -- # set +x 00:26:26.885 [2024-06-09 23:08:54.873148] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:26.885 [2024-06-09 23:08:54.873209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.885 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.885 [2024-06-09 23:08:54.942723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.885 [2024-06-09 23:08:55.016634] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:26.885 [2024-06-09 23:08:55.016767] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.885 [2024-06-09 23:08:55.016777] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.885 [2024-06-09 23:08:55.016785] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
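The trace above walks through SPDK's nvmf_tcp_init and the target launch: the two E810 ports are split so that cvl_0_0 moves into a fresh network namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, both directions are verified with ping, and nvmf_tgt is then started inside the namespace while the harness waits for its RPC socket. A consolidated sketch of those steps follows; the interface names, addresses and binary path are taken from this run, and the socket-polling loop is only an illustrative stand-in for the framework's waitforlisten helper:

  #!/usr/bin/env bash
  # Sketch only: mirrors the namespace split and target start traced above.
  set -euo pipefail
  NS=cvl_0_0_ns_spdk      # target namespace used in this run
  TGT_IF=cvl_0_0          # E810 port handed to the target
  INI_IF=cvl_0_1          # E810 port left with the initiator
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Split the two ports: the target port gets its own namespace.
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Admit NVMe/TCP on 4420 and confirm reachability in both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Launch the target inside the namespace; its Unix-domain RPC socket is
  # still reachable from the default namespace, so poll it until it answers
  # (assumed stand-in for the waitforlisten helper used by the test).
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done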
00:26:26.885 [2024-06-09 23:08:55.016896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.885 [2024-06-09 23:08:55.016998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.885 [2024-06-09 23:08:55.017158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.885 [2024-06-09 23:08:55.017158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.830 23:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.830 23:08:55 -- common/autotest_common.sh@852 -- # return 0 00:26:27.830 23:08:55 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 [2024-06-09 23:08:55.657462] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:27.830 23:08:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 23:08:55 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 Malloc0 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 [2024-06-09 23:08:55.744877] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:27.830 23:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.830 23:08:55 -- common/autotest_common.sh@10 -- # set +x 00:26:27.830 [2024-06-09 23:08:55.760718] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:27.830 [ 
00:26:27.830 { 00:26:27.830 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:27.830 "subtype": "Discovery", 00:26:27.830 "listen_addresses": [ 00:26:27.830 { 00:26:27.830 "transport": "TCP", 00:26:27.830 "trtype": "TCP", 00:26:27.830 "adrfam": "IPv4", 00:26:27.830 "traddr": "10.0.0.2", 00:26:27.830 "trsvcid": "4420" 00:26:27.830 } 00:26:27.830 ], 00:26:27.830 "allow_any_host": true, 00:26:27.830 "hosts": [] 00:26:27.830 }, 00:26:27.830 { 00:26:27.830 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.830 "subtype": "NVMe", 00:26:27.830 "listen_addresses": [ 00:26:27.830 { 00:26:27.830 "transport": "TCP", 00:26:27.830 "trtype": "TCP", 00:26:27.830 "adrfam": "IPv4", 00:26:27.830 "traddr": "10.0.0.2", 00:26:27.830 "trsvcid": "4420" 00:26:27.830 } 00:26:27.830 ], 00:26:27.830 "allow_any_host": true, 00:26:27.830 "hosts": [], 00:26:27.830 "serial_number": "SPDK00000000000001", 00:26:27.830 "model_number": "SPDK bdev Controller", 00:26:27.830 "max_namespaces": 32, 00:26:27.830 "min_cntlid": 1, 00:26:27.830 "max_cntlid": 65519, 00:26:27.830 "namespaces": [ 00:26:27.830 { 00:26:27.830 "nsid": 1, 00:26:27.830 "bdev_name": "Malloc0", 00:26:27.830 "name": "Malloc0", 00:26:27.830 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:27.830 "eui64": "ABCDEF0123456789", 00:26:27.830 "uuid": "81fbd0fa-a804-4d55-917b-f35feffd90d2" 00:26:27.830 } 00:26:27.830 ] 00:26:27.830 } 00:26:27.830 ] 00:26:27.830 23:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.830 23:08:55 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:27.830 [2024-06-09 23:08:55.797761] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
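Once the target answers on /var/tmp/spdk.sock, identify.sh configures it purely over JSON-RPC: a TCP transport, a Malloc0 ramdisk bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace (NGUID and EUI64 set explicitly), listeners on 10.0.0.2:4420 for both the subsystem and the discovery service, and finally nvmf_get_subsystems, whose JSON output is what appears above. The same sequence issued directly through rpc.py would look like the sketch below (all values copied from the trace; the $RPC shorthand is only for readability):

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems      # prints the subsystem/listener JSON shown above

The spdk_nvme_identify invocation that starts in the trace above then dials the discovery NQN over that 10.0.0.2:4420 listener; its -L all flag is what makes the controller bring-up below appear at DEBUG level.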
00:26:27.830 [2024-06-09 23:08:55.797826] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33402 ] 00:26:27.830 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.830 [2024-06-09 23:08:55.831036] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:27.830 [2024-06-09 23:08:55.831082] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:27.830 [2024-06-09 23:08:55.831087] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:27.830 [2024-06-09 23:08:55.831097] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:27.830 [2024-06-09 23:08:55.831104] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:27.830 [2024-06-09 23:08:55.834432] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:27.830 [2024-06-09 23:08:55.834461] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11409e0 0 00:26:27.830 [2024-06-09 23:08:55.842411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:27.830 [2024-06-09 23:08:55.842424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:27.830 [2024-06-09 23:08:55.842429] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:27.830 [2024-06-09 23:08:55.842432] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:27.830 [2024-06-09 23:08:55.842470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.842476] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.842480] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.830 [2024-06-09 23:08:55.842493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:27.830 [2024-06-09 23:08:55.842511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.830 [2024-06-09 23:08:55.850414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.830 [2024-06-09 23:08:55.850423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.830 [2024-06-09 23:08:55.850427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850432] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.830 [2024-06-09 23:08:55.850444] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:27.830 [2024-06-09 23:08:55.850450] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:27.830 [2024-06-09 23:08:55.850456] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:27.830 [2024-06-09 23:08:55.850471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.830 [2024-06-09 23:08:55.850486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.830 [2024-06-09 23:08:55.850502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.830 [2024-06-09 23:08:55.850760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.830 [2024-06-09 23:08:55.850769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.830 [2024-06-09 23:08:55.850772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850777] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.830 [2024-06-09 23:08:55.850786] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:27.830 [2024-06-09 23:08:55.850794] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:27.830 [2024-06-09 23:08:55.850801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.850809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.830 [2024-06-09 23:08:55.850816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.830 [2024-06-09 23:08:55.850828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.830 [2024-06-09 23:08:55.851076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.830 [2024-06-09 23:08:55.851083] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.830 [2024-06-09 23:08:55.851086] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851090] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.830 [2024-06-09 23:08:55.851096] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:27.830 [2024-06-09 23:08:55.851105] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:27.830 [2024-06-09 23:08:55.851112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.830 [2024-06-09 23:08:55.851126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.830 [2024-06-09 23:08:55.851137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.830 [2024-06-09 23:08:55.851394] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.830 [2024-06-09 
23:08:55.851406] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.830 [2024-06-09 23:08:55.851410] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851414] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.830 [2024-06-09 23:08:55.851420] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:27.830 [2024-06-09 23:08:55.851430] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.830 [2024-06-09 23:08:55.851437] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.830 [2024-06-09 23:08:55.851444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.830 [2024-06-09 23:08:55.851456] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.830 [2024-06-09 23:08:55.851682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.851689] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.851693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.851697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.851702] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:27.831 [2024-06-09 23:08:55.851707] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:27.831 [2024-06-09 23:08:55.851715] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:27.831 [2024-06-09 23:08:55.851820] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:27.831 [2024-06-09 23:08:55.851825] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:27.831 [2024-06-09 23:08:55.851833] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.851837] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.851840] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.851847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.851859] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.831 [2024-06-09 23:08:55.852087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.852093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.852097] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.852106] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:27.831 [2024-06-09 23:08:55.852115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.852129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.852141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.831 [2024-06-09 23:08:55.852365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.852372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.852375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.852384] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:27.831 [2024-06-09 23:08:55.852388] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.852396] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:27.831 [2024-06-09 23:08:55.852414] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.852425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852429] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852433] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.852440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.852452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.831 [2024-06-09 23:08:55.852722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:27.831 [2024-06-09 23:08:55.852729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:27.831 [2024-06-09 23:08:55.852733] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852736] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11409e0): datao=0, datal=4096, cccid=0 00:26:27.831 [2024-06-09 23:08:55.852741] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11a8730) on tqpair(0x11409e0): 
expected_datao=0, payload_size=4096 00:26:27.831 [2024-06-09 23:08:55.852750] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852753] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852926] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.852932] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.852936] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852940] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.852948] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:27.831 [2024-06-09 23:08:55.852957] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:27.831 [2024-06-09 23:08:55.852962] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:27.831 [2024-06-09 23:08:55.852966] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:27.831 [2024-06-09 23:08:55.852971] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:27.831 [2024-06-09 23:08:55.852975] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.852983] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.852990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.852998] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:27.831 [2024-06-09 23:08:55.853017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.831 [2024-06-09 23:08:55.853268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.853275] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.853279] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8730) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.853291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853297] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853301] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:27.831 [2024-06-09 23:08:55.853313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853317] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.831 [2024-06-09 23:08:55.853332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853335] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853339] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.831 [2024-06-09 23:08:55.853350] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.831 [2024-06-09 23:08:55.853367] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.853378] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:27.831 [2024-06-09 23:08:55.853385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853388] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.853417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8730, cid 0, qid 0 00:26:27.831 [2024-06-09 23:08:55.853422] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8890, cid 1, qid 0 00:26:27.831 [2024-06-09 23:08:55.853426] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a89f0, cid 2, qid 0 00:26:27.831 [2024-06-09 23:08:55.853431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.831 [2024-06-09 23:08:55.853436] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8cb0, cid 4, qid 0 00:26:27.831 [2024-06-09 23:08:55.853851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.853856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.853860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853864] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8cb0) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.853869] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:27.831 [2024-06-09 23:08:55.853874] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:27.831 [2024-06-09 23:08:55.853885] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853890] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.853894] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.853900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.853910] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8cb0, cid 4, qid 0 00:26:27.831 [2024-06-09 23:08:55.854162] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:27.831 [2024-06-09 23:08:55.854170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:27.831 [2024-06-09 23:08:55.854173] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.854177] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11409e0): datao=0, datal=4096, cccid=4 00:26:27.831 [2024-06-09 23:08:55.854181] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11a8cb0) on tqpair(0x11409e0): expected_datao=0, payload_size=4096 00:26:27.831 [2024-06-09 23:08:55.854259] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.854263] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.898424] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.898427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8cb0) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.898445] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:27.831 [2024-06-09 23:08:55.898464] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.898479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.898485] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898492] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 
23:08:55.898498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:27.831 [2024-06-09 23:08:55.898516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8cb0, cid 4, qid 0 00:26:27.831 [2024-06-09 23:08:55.898522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8e10, cid 5, qid 0 00:26:27.831 [2024-06-09 23:08:55.898794] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:27.831 [2024-06-09 23:08:55.898801] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:27.831 [2024-06-09 23:08:55.898805] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898809] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11409e0): datao=0, datal=1024, cccid=4 00:26:27.831 [2024-06-09 23:08:55.898813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11a8cb0) on tqpair(0x11409e0): expected_datao=0, payload_size=1024 00:26:27.831 [2024-06-09 23:08:55.898820] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898824] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898830] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.898835] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.898839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.898849] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8e10) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.940645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.940659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.940662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.940666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8cb0) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.940679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.940683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.940686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.940693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.940710] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8cb0, cid 4, qid 0 00:26:27.831 [2024-06-09 23:08:55.940969] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:27.831 [2024-06-09 23:08:55.940977] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:27.831 [2024-06-09 23:08:55.940980] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.940984] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11409e0): datao=0, datal=3072, cccid=4 00:26:27.831 [2024-06-09 23:08:55.940989] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11a8cb0) on tqpair(0x11409e0): expected_datao=0, payload_size=3072 
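The DEBUG entries above trace the full admin-queue bring-up against the discovery controller: the TCP icreq handshake, a FABRIC CONNECT, property reads of VS, CAP and CC/CSTS, enabling the controller (CC.EN = 1, then waiting for CSTS.RDY = 1), IDENTIFY controller, async-event configuration, keep-alive setup (one keep-alive every 5 s), and the discovery log page (GET LOG PAGE, LID 0x70) pulled in several reads that finish just below with a short re-read of the page header. Since this job also sets SPDK_TEST_NVME_CLI=1, the same exchange can be reproduced from the initiator side with the kernel NVMe/TCP initiator; the command below is a hedged cross-check, not something this run executes:

  # nvme-tcp was already modprobed earlier in the trace; nvme-cli's discover
  # performs the equivalent connect + discovery-log fetch against the target.
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Expected to list the same two entries as the report below: the current
  # discovery subsystem and nqn.2016-06.io.spdk:cnode1.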
00:26:27.831 [2024-06-09 23:08:55.940996] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941000] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941198] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.941205] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.941208] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941212] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8cb0) on tqpair=0x11409e0 00:26:27.831 [2024-06-09 23:08:55.941222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941226] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941229] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11409e0) 00:26:27.831 [2024-06-09 23:08:55.941236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.831 [2024-06-09 23:08:55.941251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8cb0, cid 4, qid 0 00:26:27.831 [2024-06-09 23:08:55.941497] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:27.831 [2024-06-09 23:08:55.941505] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:27.831 [2024-06-09 23:08:55.941508] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941512] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11409e0): datao=0, datal=8, cccid=4 00:26:27.831 [2024-06-09 23:08:55.941516] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11a8cb0) on tqpair(0x11409e0): expected_datao=0, payload_size=8 00:26:27.831 [2024-06-09 23:08:55.941523] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.941527] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.986412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.831 [2024-06-09 23:08:55.986421] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.831 [2024-06-09 23:08:55.986425] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.831 [2024-06-09 23:08:55.986429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8cb0) on tqpair=0x11409e0 00:26:27.831 ===================================================== 00:26:27.831 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:27.831 ===================================================== 00:26:27.831 Controller Capabilities/Features 00:26:27.831 ================================ 00:26:27.831 Vendor ID: 0000 00:26:27.831 Subsystem Vendor ID: 0000 00:26:27.831 Serial Number: .................... 00:26:27.831 Model Number: ........................................ 
00:26:27.831 Firmware Version: 24.01.1 00:26:27.831 Recommended Arb Burst: 0 00:26:27.831 IEEE OUI Identifier: 00 00 00 00:26:27.831 Multi-path I/O 00:26:27.831 May have multiple subsystem ports: No 00:26:27.831 May have multiple controllers: No 00:26:27.831 Associated with SR-IOV VF: No 00:26:27.831 Max Data Transfer Size: 131072 00:26:27.831 Max Number of Namespaces: 0 00:26:27.831 Max Number of I/O Queues: 1024 00:26:27.831 NVMe Specification Version (VS): 1.3 00:26:27.831 NVMe Specification Version (Identify): 1.3 00:26:27.831 Maximum Queue Entries: 128 00:26:27.831 Contiguous Queues Required: Yes 00:26:27.831 Arbitration Mechanisms Supported 00:26:27.831 Weighted Round Robin: Not Supported 00:26:27.831 Vendor Specific: Not Supported 00:26:27.831 Reset Timeout: 15000 ms 00:26:27.831 Doorbell Stride: 4 bytes 00:26:27.831 NVM Subsystem Reset: Not Supported 00:26:27.831 Command Sets Supported 00:26:27.831 NVM Command Set: Supported 00:26:27.831 Boot Partition: Not Supported 00:26:27.831 Memory Page Size Minimum: 4096 bytes 00:26:27.832 Memory Page Size Maximum: 4096 bytes 00:26:27.832 Persistent Memory Region: Not Supported 00:26:27.832 Optional Asynchronous Events Supported 00:26:27.832 Namespace Attribute Notices: Not Supported 00:26:27.832 Firmware Activation Notices: Not Supported 00:26:27.832 ANA Change Notices: Not Supported 00:26:27.832 PLE Aggregate Log Change Notices: Not Supported 00:26:27.832 LBA Status Info Alert Notices: Not Supported 00:26:27.832 EGE Aggregate Log Change Notices: Not Supported 00:26:27.832 Normal NVM Subsystem Shutdown event: Not Supported 00:26:27.832 Zone Descriptor Change Notices: Not Supported 00:26:27.832 Discovery Log Change Notices: Supported 00:26:27.832 Controller Attributes 00:26:27.832 128-bit Host Identifier: Not Supported 00:26:27.832 Non-Operational Permissive Mode: Not Supported 00:26:27.832 NVM Sets: Not Supported 00:26:27.832 Read Recovery Levels: Not Supported 00:26:27.832 Endurance Groups: Not Supported 00:26:27.832 Predictable Latency Mode: Not Supported 00:26:27.832 Traffic Based Keep ALive: Not Supported 00:26:27.832 Namespace Granularity: Not Supported 00:26:27.832 SQ Associations: Not Supported 00:26:27.832 UUID List: Not Supported 00:26:27.832 Multi-Domain Subsystem: Not Supported 00:26:27.832 Fixed Capacity Management: Not Supported 00:26:27.832 Variable Capacity Management: Not Supported 00:26:27.832 Delete Endurance Group: Not Supported 00:26:27.832 Delete NVM Set: Not Supported 00:26:27.832 Extended LBA Formats Supported: Not Supported 00:26:27.832 Flexible Data Placement Supported: Not Supported 00:26:27.832 00:26:27.832 Controller Memory Buffer Support 00:26:27.832 ================================ 00:26:27.832 Supported: No 00:26:27.832 00:26:27.832 Persistent Memory Region Support 00:26:27.832 ================================ 00:26:27.832 Supported: No 00:26:27.832 00:26:27.832 Admin Command Set Attributes 00:26:27.832 ============================ 00:26:27.832 Security Send/Receive: Not Supported 00:26:27.832 Format NVM: Not Supported 00:26:27.832 Firmware Activate/Download: Not Supported 00:26:27.832 Namespace Management: Not Supported 00:26:27.832 Device Self-Test: Not Supported 00:26:27.832 Directives: Not Supported 00:26:27.832 NVMe-MI: Not Supported 00:26:27.832 Virtualization Management: Not Supported 00:26:27.832 Doorbell Buffer Config: Not Supported 00:26:27.832 Get LBA Status Capability: Not Supported 00:26:27.832 Command & Feature Lockdown Capability: Not Supported 00:26:27.832 Abort Command Limit: 1 00:26:27.832 
Async Event Request Limit: 4 00:26:27.832 Number of Firmware Slots: N/A 00:26:27.832 Firmware Slot 1 Read-Only: N/A 00:26:27.832 Firmware Activation Without Reset: N/A 00:26:27.832 Multiple Update Detection Support: N/A 00:26:27.832 Firmware Update Granularity: No Information Provided 00:26:27.832 Per-Namespace SMART Log: No 00:26:27.832 Asymmetric Namespace Access Log Page: Not Supported 00:26:27.832 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:27.832 Command Effects Log Page: Not Supported 00:26:27.832 Get Log Page Extended Data: Supported 00:26:27.832 Telemetry Log Pages: Not Supported 00:26:27.832 Persistent Event Log Pages: Not Supported 00:26:27.832 Supported Log Pages Log Page: May Support 00:26:27.832 Commands Supported & Effects Log Page: Not Supported 00:26:27.832 Feature Identifiers & Effects Log Page:May Support 00:26:27.832 NVMe-MI Commands & Effects Log Page: May Support 00:26:27.832 Data Area 4 for Telemetry Log: Not Supported 00:26:27.832 Error Log Page Entries Supported: 128 00:26:27.832 Keep Alive: Not Supported 00:26:27.832 00:26:27.832 NVM Command Set Attributes 00:26:27.832 ========================== 00:26:27.832 Submission Queue Entry Size 00:26:27.832 Max: 1 00:26:27.832 Min: 1 00:26:27.832 Completion Queue Entry Size 00:26:27.832 Max: 1 00:26:27.832 Min: 1 00:26:27.832 Number of Namespaces: 0 00:26:27.832 Compare Command: Not Supported 00:26:27.832 Write Uncorrectable Command: Not Supported 00:26:27.832 Dataset Management Command: Not Supported 00:26:27.832 Write Zeroes Command: Not Supported 00:26:27.832 Set Features Save Field: Not Supported 00:26:27.832 Reservations: Not Supported 00:26:27.832 Timestamp: Not Supported 00:26:27.832 Copy: Not Supported 00:26:27.832 Volatile Write Cache: Not Present 00:26:27.832 Atomic Write Unit (Normal): 1 00:26:27.832 Atomic Write Unit (PFail): 1 00:26:27.832 Atomic Compare & Write Unit: 1 00:26:27.832 Fused Compare & Write: Supported 00:26:27.832 Scatter-Gather List 00:26:27.832 SGL Command Set: Supported 00:26:27.832 SGL Keyed: Supported 00:26:27.832 SGL Bit Bucket Descriptor: Not Supported 00:26:27.832 SGL Metadata Pointer: Not Supported 00:26:27.832 Oversized SGL: Not Supported 00:26:27.832 SGL Metadata Address: Not Supported 00:26:27.832 SGL Offset: Supported 00:26:27.832 Transport SGL Data Block: Not Supported 00:26:27.832 Replay Protected Memory Block: Not Supported 00:26:27.832 00:26:27.832 Firmware Slot Information 00:26:27.832 ========================= 00:26:27.832 Active slot: 0 00:26:27.832 00:26:27.832 00:26:27.832 Error Log 00:26:27.832 ========= 00:26:27.832 00:26:27.832 Active Namespaces 00:26:27.832 ================= 00:26:27.832 Discovery Log Page 00:26:27.832 ================== 00:26:27.832 Generation Counter: 2 00:26:27.832 Number of Records: 2 00:26:27.832 Record Format: 0 00:26:27.832 00:26:27.832 Discovery Log Entry 0 00:26:27.832 ---------------------- 00:26:27.832 Transport Type: 3 (TCP) 00:26:27.832 Address Family: 1 (IPv4) 00:26:27.832 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:27.832 Entry Flags: 00:26:27.832 Duplicate Returned Information: 1 00:26:27.832 Explicit Persistent Connection Support for Discovery: 1 00:26:27.832 Transport Requirements: 00:26:27.832 Secure Channel: Not Required 00:26:27.832 Port ID: 0 (0x0000) 00:26:27.832 Controller ID: 65535 (0xffff) 00:26:27.832 Admin Max SQ Size: 128 00:26:27.832 Transport Service Identifier: 4420 00:26:27.832 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:27.832 Transport Address: 10.0.0.2 00:26:27.832 
Discovery Log Entry 1 00:26:27.832 ---------------------- 00:26:27.832 Transport Type: 3 (TCP) 00:26:27.832 Address Family: 1 (IPv4) 00:26:27.832 Subsystem Type: 2 (NVM Subsystem) 00:26:27.832 Entry Flags: 00:26:27.832 Duplicate Returned Information: 0 00:26:27.832 Explicit Persistent Connection Support for Discovery: 0 00:26:27.832 Transport Requirements: 00:26:27.832 Secure Channel: Not Required 00:26:27.832 Port ID: 0 (0x0000) 00:26:27.832 Controller ID: 65535 (0xffff) 00:26:27.832 Admin Max SQ Size: 128 00:26:27.832 Transport Service Identifier: 4420 00:26:27.832 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:27.832 Transport Address: 10.0.0.2 [2024-06-09 23:08:55.986519] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:27.832 [2024-06-09 23:08:55.986532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.832 [2024-06-09 23:08:55.986539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.832 [2024-06-09 23:08:55.986545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.832 [2024-06-09 23:08:55.986551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.832 [2024-06-09 23:08:55.986560] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.986564] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.986568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.986575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.986588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.986857] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.986865] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.986868] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.986872] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.986880] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.986884] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.986887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.986894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.986909] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.987145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.987151] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.987155] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987158] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.987164] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:27.832 [2024-06-09 23:08:55.987168] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:27.832 [2024-06-09 23:08:55.987177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.987191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.987202] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.987436] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.987444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.987447] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.987465] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987469] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.987480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.987491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.987733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.987739] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.987743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987746] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.987757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987761] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.987764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.987771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.987782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.988007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 
23:08:55.988014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.988018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988021] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.988031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988039] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.988046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.988056] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.988279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.988285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.988289] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988292] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.988303] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988310] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.988317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.988327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.988551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.988558] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.988562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988565] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.988579] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988583] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.988593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.832 [2024-06-09 23:08:55.988604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.832 [2024-06-09 23:08:55.988845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.832 [2024-06-09 23:08:55.988851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.832 [2024-06-09 23:08:55.988855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
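The repeated FABRIC PROPERTY GET entries around this point are the host polling CSTS while it detaches from the discovery controller after dumping its log (the trace just below reports shutdown complete in 7 milliseconds). When identify exits, the trap installed at the top of the run (process_shm --id $NVMF_APP_SHM_ID; nvmftestfini) is what restores the machine; a rough manual equivalent for this topology is sketched below, with the caveat that nvmftestfini also handles shared-memory and driver cleanup omitted here:

  # Hedged manual teardown mirroring what nvmftestfini does for this setup.
  pkill -f nvmf_tgt || true                                   # stop the target started above
  iptables -D INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT || true
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 netns 1   # return the port to the default namespace
  ip netns del cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1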
00:26:27.832 [2024-06-09 23:08:55.988858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.832 [2024-06-09 23:08:55.988869] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988873] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.832 [2024-06-09 23:08:55.988876] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.832 [2024-06-09 23:08:55.988883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.988894] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.989137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.989144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.989147] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989151] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.989161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989168] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.833 [2024-06-09 23:08:55.989175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.989185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.989422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.989429] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.989432] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.989447] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.833 [2024-06-09 23:08:55.989461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.989472] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.989731] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.989740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.989744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.989757] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.989769] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.833 [2024-06-09 23:08:55.989776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.989787] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.990009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.990016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.990019] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990024] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.990036] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.833 [2024-06-09 23:08:55.990050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.990061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.990321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.990328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.990332] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.990346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.990354] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11409e0) 00:26:27.833 [2024-06-09 23:08:55.990360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.833 [2024-06-09 23:08:55.990371] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11a8b50, cid 3, qid 0 00:26:27.833 [2024-06-09 23:08:55.994410] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:27.833 [2024-06-09 23:08:55.994418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:27.833 [2024-06-09 23:08:55.994422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:27.833 [2024-06-09 23:08:55.994425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11a8b50) on tqpair=0x11409e0 00:26:27.833 [2024-06-09 23:08:55.994433] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:26:27.833 00:26:27.833 23:08:56 -- host/identify.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:28.096 [2024-06-09 23:08:56.031865] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:28.096 [2024-06-09 23:08:56.031930] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid33510 ] 00:26:28.096 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.096 [2024-06-09 23:08:56.064929] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:28.096 [2024-06-09 23:08:56.064973] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:28.096 [2024-06-09 23:08:56.064979] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:28.096 [2024-06-09 23:08:56.064990] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:28.096 [2024-06-09 23:08:56.064997] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:28.096 [2024-06-09 23:08:56.068443] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:28.096 [2024-06-09 23:08:56.068477] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b2d9e0 0 00:26:28.096 [2024-06-09 23:08:56.076413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:28.096 [2024-06-09 23:08:56.076428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:28.096 [2024-06-09 23:08:56.076432] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:28.096 [2024-06-09 23:08:56.076435] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:28.096 [2024-06-09 23:08:56.076471] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.076477] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.076481] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.076492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:28.096 [2024-06-09 23:08:56.076508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.083414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.083428] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.083431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083436] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.083450] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:28.096 [2024-06-09 23:08:56.083459] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:28.096 [2024-06-09 23:08:56.083464] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:28.096 [2024-06-09 23:08:56.083478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.083494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.083511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.083764] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.083773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.083776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.083789] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:28.096 [2024-06-09 23:08:56.083797] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:28.096 [2024-06-09 23:08:56.083804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.083811] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.083824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.083837] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.084087] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.084097] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.084100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.084110] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:28.096 [2024-06-09 23:08:56.084118] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.084125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084129] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.084141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.084154] 
nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.084363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.084370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.084373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.084383] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.084393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084400] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084412] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.084419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.084431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.084728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.084737] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.084740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084744] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.084749] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:28.096 [2024-06-09 23:08:56.084754] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.084762] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.084867] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:28.096 [2024-06-09 23:08:56.084871] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.084882] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.084889] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.084896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.084908] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.085124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.085133] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.085136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.085145] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:28.096 [2024-06-09 23:08:56.085155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085162] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.085169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.085183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.085435] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.085444] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.085448] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085451] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.085457] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:28.096 [2024-06-09 23:08:56.085461] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.085469] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:28.096 [2024-06-09 23:08:56.085478] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.085488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085496] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.085503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.085515] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.085842] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.096 [2024-06-09 23:08:56.085852] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.096 [2024-06-09 23:08:56.085855] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085859] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=4096, cccid=0 00:26:28.096 [2024-06-09 23:08:56.085863] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95730) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=4096 00:26:28.096 [2024-06-09 23:08:56.085871] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.085878] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.086087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.086091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086094] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.086103] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:28.096 [2024-06-09 23:08:56.086111] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:28.096 [2024-06-09 23:08:56.086115] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:28.096 [2024-06-09 23:08:56.086119] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:28.096 [2024-06-09 23:08:56.086124] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:28.096 [2024-06-09 23:08:56.086128] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.086137] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.086147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:28.096 [2024-06-09 23:08:56.086174] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.086438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.096 [2024-06-09 23:08:56.086447] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.096 [2024-06-09 23:08:56.086451] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086455] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95730) on tqpair=0x1b2d9e0 00:26:28.096 [2024-06-09 23:08:56.086462] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086466] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:28.096 [2024-06-09 23:08:56.086481] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086488] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.096 [2024-06-09 23:08:56.086500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.096 [2024-06-09 23:08:56.086518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086524] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086528] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.096 [2024-06-09 23:08:56.086538] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.086550] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:28.096 [2024-06-09 23:08:56.086559] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.096 [2024-06-09 23:08:56.086565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.096 [2024-06-09 23:08:56.086572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.096 [2024-06-09 23:08:56.086586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95730, cid 0, qid 0 00:26:28.096 [2024-06-09 23:08:56.086591] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95890, cid 1, qid 0 00:26:28.096 [2024-06-09 23:08:56.086595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b959f0, cid 2, qid 0 00:26:28.096 [2024-06-09 23:08:56.086600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.096 [2024-06-09 23:08:56.086604] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.086882] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.086891] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.086895] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.086899] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.086904] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:28.097 [2024-06-09 23:08:56.086909] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.086917] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.086923] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.086929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.086935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.086941] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.086950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:28.097 [2024-06-09 23:08:56.086963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.087260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.087268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.087271] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.087275] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.087326] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.087336] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.087348] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.087352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.087355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.087361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.087374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.091414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.091426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.091430] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091434] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=4096, cccid=4 00:26:28.097 [2024-06-09 23:08:56.091438] 
nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95cb0) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=4096 00:26:28.097 [2024-06-09 23:08:56.091446] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091450] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091455] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.091461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.091464] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.091479] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:28.097 [2024-06-09 23:08:56.091495] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.091505] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.091515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091522] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.091529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.091543] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.091774] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.091786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.091793] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091799] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=4096, cccid=4 00:26:28.097 [2024-06-09 23:08:56.091807] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95cb0) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=4096 00:26:28.097 [2024-06-09 23:08:56.091891] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.091898] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.092130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.092134] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092141] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.092155] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092165] 
nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092173] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.092189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.092201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.092459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.092471] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.092477] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092484] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=4096, cccid=4 00:26:28.097 [2024-06-09 23:08:56.092491] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95cb0) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=4096 00:26:28.097 [2024-06-09 23:08:56.092575] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092582] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.092805] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.092808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.092821] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092828] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092838] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092843] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092852] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092857] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:28.097 [2024-06-09 23:08:56.092862] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:28.097 [2024-06-09 23:08:56.092867] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state 
to ready (no timeout) 00:26:28.097 [2024-06-09 23:08:56.092881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092885] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.092895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.092901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.092911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.092917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:28.097 [2024-06-09 23:08:56.092932] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.092937] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95e10, cid 5, qid 0 00:26:28.097 [2024-06-09 23:08:56.093207] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.093216] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.093219] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093223] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.093231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.093236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.093240] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95e10) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.093253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.093271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.093282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95e10, cid 5, qid 0 00:26:28.097 [2024-06-09 23:08:56.093557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.093566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.093569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95e10) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.093583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093590] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093594] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.093600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.093611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95e10, cid 5, qid 0 00:26:28.097 [2024-06-09 23:08:56.093871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.093878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.093881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95e10) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.093895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093901] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.093905] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.093911] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.093922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95e10, cid 5, qid 0 00:26:28.097 [2024-06-09 23:08:56.094147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.094156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.094160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95e10) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.094176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094180] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.094190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.094198] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.094214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.094221] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094227] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.094233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.094240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b2d9e0) 00:26:28.097 [2024-06-09 23:08:56.094253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.097 [2024-06-09 23:08:56.094266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95e10, cid 5, qid 0 00:26:28.097 [2024-06-09 23:08:56.094271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95cb0, cid 4, qid 0 00:26:28.097 [2024-06-09 23:08:56.094275] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95f70, cid 6, qid 0 00:26:28.097 [2024-06-09 23:08:56.094280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b960d0, cid 7, qid 0 00:26:28.097 [2024-06-09 23:08:56.094632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.094644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.094650] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094656] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=8192, cccid=5 00:26:28.097 [2024-06-09 23:08:56.094664] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95e10) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=8192 00:26:28.097 [2024-06-09 23:08:56.094859] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094866] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.094877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.094881] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094884] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=512, cccid=4 00:26:28.097 [2024-06-09 23:08:56.094892] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95cb0) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=512 00:26:28.097 [2024-06-09 23:08:56.094899] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094902] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.094913] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.094917] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094920] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x1b2d9e0): datao=0, datal=512, cccid=6 00:26:28.097 [2024-06-09 23:08:56.094924] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b95f70) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=512 00:26:28.097 [2024-06-09 23:08:56.094931] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094935] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:28.097 [2024-06-09 23:08:56.094946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:28.097 [2024-06-09 23:08:56.094949] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094952] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b2d9e0): datao=0, datal=4096, cccid=7 00:26:28.097 [2024-06-09 23:08:56.094956] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b960d0) on tqpair(0x1b2d9e0): expected_datao=0, payload_size=4096 00:26:28.097 [2024-06-09 23:08:56.094964] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.094967] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.095056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.095063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.095066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.095070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95e10) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.095085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.095095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.095099] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.095102] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95cb0) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.095111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.095117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.095121] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.095124] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95f70) on tqpair=0x1b2d9e0 00:26:28.097 [2024-06-09 23:08:56.095132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.097 [2024-06-09 23:08:56.095138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.097 [2024-06-09 23:08:56.095141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.097 [2024-06-09 23:08:56.095145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b960d0) on tqpair=0x1b2d9e0 00:26:28.097 ===================================================== 00:26:28.097 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.097 ===================================================== 00:26:28.097 Controller Capabilities/Features 00:26:28.097 ================================ 00:26:28.097 Vendor ID: 8086 00:26:28.097 Subsystem Vendor ID: 8086 00:26:28.097 Serial 
Number: SPDK00000000000001 00:26:28.097 Model Number: SPDK bdev Controller 00:26:28.097 Firmware Version: 24.01.1 00:26:28.097 Recommended Arb Burst: 6 00:26:28.097 IEEE OUI Identifier: e4 d2 5c 00:26:28.097 Multi-path I/O 00:26:28.097 May have multiple subsystem ports: Yes 00:26:28.097 May have multiple controllers: Yes 00:26:28.097 Associated with SR-IOV VF: No 00:26:28.097 Max Data Transfer Size: 131072 00:26:28.097 Max Number of Namespaces: 32 00:26:28.097 Max Number of I/O Queues: 127 00:26:28.097 NVMe Specification Version (VS): 1.3 00:26:28.097 NVMe Specification Version (Identify): 1.3 00:26:28.097 Maximum Queue Entries: 128 00:26:28.097 Contiguous Queues Required: Yes 00:26:28.097 Arbitration Mechanisms Supported 00:26:28.097 Weighted Round Robin: Not Supported 00:26:28.097 Vendor Specific: Not Supported 00:26:28.097 Reset Timeout: 15000 ms 00:26:28.097 Doorbell Stride: 4 bytes 00:26:28.097 NVM Subsystem Reset: Not Supported 00:26:28.097 Command Sets Supported 00:26:28.097 NVM Command Set: Supported 00:26:28.097 Boot Partition: Not Supported 00:26:28.097 Memory Page Size Minimum: 4096 bytes 00:26:28.097 Memory Page Size Maximum: 4096 bytes 00:26:28.097 Persistent Memory Region: Not Supported 00:26:28.097 Optional Asynchronous Events Supported 00:26:28.097 Namespace Attribute Notices: Supported 00:26:28.097 Firmware Activation Notices: Not Supported 00:26:28.097 ANA Change Notices: Not Supported 00:26:28.097 PLE Aggregate Log Change Notices: Not Supported 00:26:28.097 LBA Status Info Alert Notices: Not Supported 00:26:28.097 EGE Aggregate Log Change Notices: Not Supported 00:26:28.097 Normal NVM Subsystem Shutdown event: Not Supported 00:26:28.097 Zone Descriptor Change Notices: Not Supported 00:26:28.097 Discovery Log Change Notices: Not Supported 00:26:28.097 Controller Attributes 00:26:28.097 128-bit Host Identifier: Supported 00:26:28.097 Non-Operational Permissive Mode: Not Supported 00:26:28.097 NVM Sets: Not Supported 00:26:28.097 Read Recovery Levels: Not Supported 00:26:28.097 Endurance Groups: Not Supported 00:26:28.098 Predictable Latency Mode: Not Supported 00:26:28.098 Traffic Based Keep ALive: Not Supported 00:26:28.098 Namespace Granularity: Not Supported 00:26:28.098 SQ Associations: Not Supported 00:26:28.098 UUID List: Not Supported 00:26:28.098 Multi-Domain Subsystem: Not Supported 00:26:28.098 Fixed Capacity Management: Not Supported 00:26:28.098 Variable Capacity Management: Not Supported 00:26:28.098 Delete Endurance Group: Not Supported 00:26:28.098 Delete NVM Set: Not Supported 00:26:28.098 Extended LBA Formats Supported: Not Supported 00:26:28.098 Flexible Data Placement Supported: Not Supported 00:26:28.098 00:26:28.098 Controller Memory Buffer Support 00:26:28.098 ================================ 00:26:28.098 Supported: No 00:26:28.098 00:26:28.098 Persistent Memory Region Support 00:26:28.098 ================================ 00:26:28.098 Supported: No 00:26:28.098 00:26:28.098 Admin Command Set Attributes 00:26:28.098 ============================ 00:26:28.098 Security Send/Receive: Not Supported 00:26:28.098 Format NVM: Not Supported 00:26:28.098 Firmware Activate/Download: Not Supported 00:26:28.098 Namespace Management: Not Supported 00:26:28.098 Device Self-Test: Not Supported 00:26:28.098 Directives: Not Supported 00:26:28.098 NVMe-MI: Not Supported 00:26:28.098 Virtualization Management: Not Supported 00:26:28.098 Doorbell Buffer Config: Not Supported 00:26:28.098 Get LBA Status Capability: Not Supported 00:26:28.098 Command & Feature Lockdown 
Capability: Not Supported 00:26:28.098 Abort Command Limit: 4 00:26:28.098 Async Event Request Limit: 4 00:26:28.098 Number of Firmware Slots: N/A 00:26:28.098 Firmware Slot 1 Read-Only: N/A 00:26:28.098 Firmware Activation Without Reset: N/A 00:26:28.098 Multiple Update Detection Support: N/A 00:26:28.098 Firmware Update Granularity: No Information Provided 00:26:28.098 Per-Namespace SMART Log: No 00:26:28.098 Asymmetric Namespace Access Log Page: Not Supported 00:26:28.098 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:28.098 Command Effects Log Page: Supported 00:26:28.098 Get Log Page Extended Data: Supported 00:26:28.098 Telemetry Log Pages: Not Supported 00:26:28.098 Persistent Event Log Pages: Not Supported 00:26:28.098 Supported Log Pages Log Page: May Support 00:26:28.098 Commands Supported & Effects Log Page: Not Supported 00:26:28.098 Feature Identifiers & Effects Log Page:May Support 00:26:28.098 NVMe-MI Commands & Effects Log Page: May Support 00:26:28.098 Data Area 4 for Telemetry Log: Not Supported 00:26:28.098 Error Log Page Entries Supported: 128 00:26:28.098 Keep Alive: Supported 00:26:28.098 Keep Alive Granularity: 10000 ms 00:26:28.098 00:26:28.098 NVM Command Set Attributes 00:26:28.098 ========================== 00:26:28.098 Submission Queue Entry Size 00:26:28.098 Max: 64 00:26:28.098 Min: 64 00:26:28.098 Completion Queue Entry Size 00:26:28.098 Max: 16 00:26:28.098 Min: 16 00:26:28.098 Number of Namespaces: 32 00:26:28.098 Compare Command: Supported 00:26:28.098 Write Uncorrectable Command: Not Supported 00:26:28.098 Dataset Management Command: Supported 00:26:28.098 Write Zeroes Command: Supported 00:26:28.098 Set Features Save Field: Not Supported 00:26:28.098 Reservations: Supported 00:26:28.098 Timestamp: Not Supported 00:26:28.098 Copy: Supported 00:26:28.098 Volatile Write Cache: Present 00:26:28.098 Atomic Write Unit (Normal): 1 00:26:28.098 Atomic Write Unit (PFail): 1 00:26:28.098 Atomic Compare & Write Unit: 1 00:26:28.098 Fused Compare & Write: Supported 00:26:28.098 Scatter-Gather List 00:26:28.098 SGL Command Set: Supported 00:26:28.098 SGL Keyed: Supported 00:26:28.098 SGL Bit Bucket Descriptor: Not Supported 00:26:28.098 SGL Metadata Pointer: Not Supported 00:26:28.098 Oversized SGL: Not Supported 00:26:28.098 SGL Metadata Address: Not Supported 00:26:28.098 SGL Offset: Supported 00:26:28.098 Transport SGL Data Block: Not Supported 00:26:28.098 Replay Protected Memory Block: Not Supported 00:26:28.098 00:26:28.098 Firmware Slot Information 00:26:28.098 ========================= 00:26:28.098 Active slot: 1 00:26:28.098 Slot 1 Firmware Revision: 24.01.1 00:26:28.098 00:26:28.098 00:26:28.098 Commands Supported and Effects 00:26:28.098 ============================== 00:26:28.098 Admin Commands 00:26:28.098 -------------- 00:26:28.098 Get Log Page (02h): Supported 00:26:28.098 Identify (06h): Supported 00:26:28.098 Abort (08h): Supported 00:26:28.098 Set Features (09h): Supported 00:26:28.098 Get Features (0Ah): Supported 00:26:28.098 Asynchronous Event Request (0Ch): Supported 00:26:28.098 Keep Alive (18h): Supported 00:26:28.098 I/O Commands 00:26:28.098 ------------ 00:26:28.098 Flush (00h): Supported LBA-Change 00:26:28.098 Write (01h): Supported LBA-Change 00:26:28.098 Read (02h): Supported 00:26:28.098 Compare (05h): Supported 00:26:28.098 Write Zeroes (08h): Supported LBA-Change 00:26:28.098 Dataset Management (09h): Supported LBA-Change 00:26:28.098 Copy (19h): Supported LBA-Change 00:26:28.098 Unknown (79h): Supported LBA-Change 00:26:28.098 
Unknown (7Ah): Supported 00:26:28.098 00:26:28.098 Error Log 00:26:28.098 ========= 00:26:28.098 00:26:28.098 Arbitration 00:26:28.098 =========== 00:26:28.098 Arbitration Burst: 1 00:26:28.098 00:26:28.098 Power Management 00:26:28.098 ================ 00:26:28.098 Number of Power States: 1 00:26:28.098 Current Power State: Power State #0 00:26:28.098 Power State #0: 00:26:28.098 Max Power: 0.00 W 00:26:28.098 Non-Operational State: Operational 00:26:28.098 Entry Latency: Not Reported 00:26:28.098 Exit Latency: Not Reported 00:26:28.098 Relative Read Throughput: 0 00:26:28.098 Relative Read Latency: 0 00:26:28.098 Relative Write Throughput: 0 00:26:28.098 Relative Write Latency: 0 00:26:28.098 Idle Power: Not Reported 00:26:28.098 Active Power: Not Reported 00:26:28.098 Non-Operational Permissive Mode: Not Supported 00:26:28.098 00:26:28.098 Health Information 00:26:28.098 ================== 00:26:28.098 Critical Warnings: 00:26:28.098 Available Spare Space: OK 00:26:28.098 Temperature: OK 00:26:28.098 Device Reliability: OK 00:26:28.098 Read Only: No 00:26:28.098 Volatile Memory Backup: OK 00:26:28.098 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:28.098 Temperature Threshold: [2024-06-09 23:08:56.095253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.095259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.095262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.095269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.095285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b960d0, cid 7, qid 0 00:26:28.098 [2024-06-09 23:08:56.099415] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.099427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.099431] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099435] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b960d0) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.099471] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:28.098 [2024-06-09 23:08:56.099483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.098 [2024-06-09 23:08:56.099490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.098 [2024-06-09 23:08:56.099498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.098 [2024-06-09 23:08:56.099505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:28.098 [2024-06-09 23:08:56.099513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.099527] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.099542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.099763] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.099771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.099774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.099786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.099795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.099806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.099821] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.100069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.100079] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.100082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.100091] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:28.098 [2024-06-09 23:08:56.100096] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:28.098 [2024-06-09 23:08:56.100105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100109] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.100120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.100139] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.100414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.100423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.100426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.100442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100449] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100453] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.100460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.100472] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.100697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.100706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.100710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100713] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.100724] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100731] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.100738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.100751] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.100976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.100985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.100989] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.100992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.101003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101006] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101010] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.101016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.101029] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.101295] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.101303] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.101307] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101310] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.101321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101328] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.101338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.101349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.101601] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.101610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.101614] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101617] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.101628] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.101642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.101656] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.101877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.101886] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.101890] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.101904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.101911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.101917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.101929] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.098 [2024-06-09 23:08:56.102177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.102186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.102189] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102193] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.102203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.102217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.098 [2024-06-09 23:08:56.102230] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 
0 00:26:28.098 [2024-06-09 23:08:56.102502] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.098 [2024-06-09 23:08:56.102509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.098 [2024-06-09 23:08:56.102513] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102517] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.098 [2024-06-09 23:08:56.102528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102534] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.098 [2024-06-09 23:08:56.102537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.098 [2024-06-09 23:08:56.102544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.099 [2024-06-09 23:08:56.102556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.099 [2024-06-09 23:08:56.102813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.099 [2024-06-09 23:08:56.102826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.099 [2024-06-09 23:08:56.102830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.102834] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.099 [2024-06-09 23:08:56.102845] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.102848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.102852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.099 [2024-06-09 23:08:56.102858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.099 [2024-06-09 23:08:56.102873] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.099 [2024-06-09 23:08:56.103168] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.099 [2024-06-09 23:08:56.103177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.099 [2024-06-09 23:08:56.103181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.103184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.099 [2024-06-09 23:08:56.103194] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.103198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.103202] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.099 [2024-06-09 23:08:56.103208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.099 [2024-06-09 23:08:56.103221] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.099 [2024-06-09 23:08:56.107411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.099 [2024-06-09 23:08:56.107423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:28.099 [2024-06-09 23:08:56.107426] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.107430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.099 [2024-06-09 23:08:56.107444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.107450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.107453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b2d9e0) 00:26:28.099 [2024-06-09 23:08:56.107460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.099 [2024-06-09 23:08:56.107474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b95b50, cid 3, qid 0 00:26:28.099 [2024-06-09 23:08:56.107749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:28.099 [2024-06-09 23:08:56.107760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:28.099 [2024-06-09 23:08:56.107764] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:28.099 [2024-06-09 23:08:56.107768] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b95b50) on tqpair=0x1b2d9e0 00:26:28.099 [2024-06-09 23:08:56.107776] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:28.099 0 Kelvin (-273 Celsius) 00:26:28.099 Available Spare: 0% 00:26:28.099 Available Spare Threshold: 0% 00:26:28.099 Life Percentage Used: 0% 00:26:28.099 Data Units Read: 0 00:26:28.099 Data Units Written: 0 00:26:28.099 Host Read Commands: 0 00:26:28.099 Host Write Commands: 0 00:26:28.099 Controller Busy Time: 0 minutes 00:26:28.099 Power Cycles: 0 00:26:28.099 Power On Hours: 0 hours 00:26:28.099 Unsafe Shutdowns: 0 00:26:28.099 Unrecoverable Media Errors: 0 00:26:28.099 Lifetime Error Log Entries: 0 00:26:28.099 Warning Temperature Time: 0 minutes 00:26:28.099 Critical Temperature Time: 0 minutes 00:26:28.099 00:26:28.099 Number of Queues 00:26:28.099 ================ 00:26:28.099 Number of I/O Submission Queues: 127 00:26:28.099 Number of I/O Completion Queues: 127 00:26:28.099 00:26:28.099 Active Namespaces 00:26:28.099 ================= 00:26:28.099 Namespace ID:1 00:26:28.099 Error Recovery Timeout: Unlimited 00:26:28.099 Command Set Identifier: NVM (00h) 00:26:28.099 Deallocate: Supported 00:26:28.099 Deallocated/Unwritten Error: Not Supported 00:26:28.099 Deallocated Read Value: Unknown 00:26:28.099 Deallocate in Write Zeroes: Not Supported 00:26:28.099 Deallocated Guard Field: 0xFFFF 00:26:28.099 Flush: Supported 00:26:28.099 Reservation: Supported 00:26:28.099 Namespace Sharing Capabilities: Multiple Controllers 00:26:28.099 Size (in LBAs): 131072 (0GiB) 00:26:28.099 Capacity (in LBAs): 131072 (0GiB) 00:26:28.099 Utilization (in LBAs): 131072 (0GiB) 00:26:28.099 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:28.099 EUI64: ABCDEF0123456789 00:26:28.099 UUID: 81fbd0fa-a804-4d55-917b-f35feffd90d2 00:26:28.099 Thin Provisioning: Not Supported 00:26:28.099 Per-NS Atomic Units: Yes 00:26:28.099 Atomic Boundary Size (Normal): 0 00:26:28.099 Atomic Boundary Size (PFail): 0 00:26:28.099 Atomic Boundary Offset: 0 00:26:28.099 Maximum Single Source Range Length: 65535 00:26:28.099 Maximum Copy Length: 65535 00:26:28.099 Maximum Source Range 
Count: 1 00:26:28.099 NGUID/EUI64 Never Reused: No 00:26:28.099 Namespace Write Protected: No 00:26:28.099 Number of LBA Formats: 1 00:26:28.099 Current LBA Format: LBA Format #00 00:26:28.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:28.099 00:26:28.099 23:08:56 -- host/identify.sh@51 -- # sync 00:26:28.099 23:08:56 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.099 23:08:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:28.099 23:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:28.099 23:08:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:28.099 23:08:56 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:28.099 23:08:56 -- host/identify.sh@56 -- # nvmftestfini 00:26:28.099 23:08:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:28.099 23:08:56 -- nvmf/common.sh@116 -- # sync 00:26:28.099 23:08:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:28.099 23:08:56 -- nvmf/common.sh@119 -- # set +e 00:26:28.099 23:08:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:28.099 23:08:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:28.099 rmmod nvme_tcp 00:26:28.099 rmmod nvme_fabrics 00:26:28.099 rmmod nvme_keyring 00:26:28.099 23:08:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:28.099 23:08:56 -- nvmf/common.sh@123 -- # set -e 00:26:28.099 23:08:56 -- nvmf/common.sh@124 -- # return 0 00:26:28.099 23:08:56 -- nvmf/common.sh@477 -- # '[' -n 33302 ']' 00:26:28.099 23:08:56 -- nvmf/common.sh@478 -- # killprocess 33302 00:26:28.099 23:08:56 -- common/autotest_common.sh@926 -- # '[' -z 33302 ']' 00:26:28.099 23:08:56 -- common/autotest_common.sh@930 -- # kill -0 33302 00:26:28.099 23:08:56 -- common/autotest_common.sh@931 -- # uname 00:26:28.099 23:08:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.099 23:08:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 33302 00:26:28.099 23:08:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.099 23:08:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.099 23:08:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 33302' 00:26:28.099 killing process with pid 33302 00:26:28.099 23:08:56 -- common/autotest_common.sh@945 -- # kill 33302 00:26:28.099 [2024-06-09 23:08:56.255985] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:28.099 23:08:56 -- common/autotest_common.sh@950 -- # wait 33302 00:26:28.360 23:08:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:28.360 23:08:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:28.360 23:08:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:28.360 23:08:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.360 23:08:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:28.360 23:08:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.360 23:08:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.360 23:08:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.909 23:08:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:30.909 00:26:30.909 real 0m10.848s 00:26:30.909 user 0m7.599s 00:26:30.909 sys 0m5.623s 00:26:30.909 23:08:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.909 23:08:58 -- common/autotest_common.sh@10 -- # set 
+x 00:26:30.909 ************************************ 00:26:30.909 END TEST nvmf_identify 00:26:30.909 ************************************ 00:26:30.909 23:08:58 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:30.909 23:08:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:30.909 23:08:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:30.909 23:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:30.909 ************************************ 00:26:30.909 START TEST nvmf_perf 00:26:30.909 ************************************ 00:26:30.909 23:08:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:30.909 * Looking for test storage... 00:26:30.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:30.909 23:08:58 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.909 23:08:58 -- nvmf/common.sh@7 -- # uname -s 00:26:30.909 23:08:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.909 23:08:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.909 23:08:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.909 23:08:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.909 23:08:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.909 23:08:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.909 23:08:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.909 23:08:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.909 23:08:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.909 23:08:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.909 23:08:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.909 23:08:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:30.909 23:08:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.909 23:08:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.909 23:08:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.909 23:08:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.909 23:08:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.909 23:08:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.909 23:08:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.909 23:08:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.909 23:08:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.909 23:08:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.909 23:08:58 -- paths/export.sh@5 -- # export PATH 00:26:30.910 23:08:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.910 23:08:58 -- nvmf/common.sh@46 -- # : 0 00:26:30.910 23:08:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:30.910 23:08:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:30.910 23:08:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:30.910 23:08:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.910 23:08:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.910 23:08:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:30.910 23:08:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:30.910 23:08:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:30.910 23:08:58 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:30.910 23:08:58 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:30.910 23:08:58 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:30.910 23:08:58 -- host/perf.sh@17 -- # nvmftestinit 00:26:30.910 23:08:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:30.910 23:08:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.910 23:08:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:30.910 23:08:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:30.910 23:08:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:30.910 23:08:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.910 23:08:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.910 23:08:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.910 23:08:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:30.910 23:08:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:30.910 23:08:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:30.910 23:08:58 -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.534 23:09:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:37.534 23:09:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:37.534 23:09:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:37.534 23:09:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:37.534 23:09:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:37.534 23:09:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:37.534 23:09:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:37.534 23:09:05 -- nvmf/common.sh@294 -- # net_devs=() 00:26:37.534 23:09:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:37.534 23:09:05 -- nvmf/common.sh@295 -- # e810=() 00:26:37.534 23:09:05 -- nvmf/common.sh@295 -- # local -ga e810 00:26:37.534 23:09:05 -- nvmf/common.sh@296 -- # x722=() 00:26:37.534 23:09:05 -- nvmf/common.sh@296 -- # local -ga x722 00:26:37.534 23:09:05 -- nvmf/common.sh@297 -- # mlx=() 00:26:37.534 23:09:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:37.534 23:09:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.534 23:09:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:37.534 23:09:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:37.534 23:09:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:37.534 23:09:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:37.534 23:09:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:37.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:37.534 23:09:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:37.534 23:09:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:37.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:37.534 23:09:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:26:37.534 23:09:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:37.534 23:09:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:37.534 23:09:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:37.534 23:09:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.534 23:09:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:37.534 23:09:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.534 23:09:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:37.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:37.534 23:09:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.534 23:09:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:37.535 23:09:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.535 23:09:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:37.535 23:09:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.535 23:09:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:37.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:37.535 23:09:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.535 23:09:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:37.535 23:09:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:37.535 23:09:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:37.535 23:09:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:37.535 23:09:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:37.535 23:09:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.535 23:09:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.535 23:09:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.535 23:09:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:37.535 23:09:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.535 23:09:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.535 23:09:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:37.535 23:09:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.535 23:09:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.535 23:09:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:37.535 23:09:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:37.535 23:09:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.535 23:09:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.535 23:09:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.535 23:09:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.535 23:09:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:37.535 23:09:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.535 23:09:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.535 23:09:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.535 23:09:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:37.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:37.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:26:37.535 00:26:37.535 --- 10.0.0.2 ping statistics --- 00:26:37.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.535 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:26:37.535 23:09:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:37.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:26:37.535 00:26:37.535 --- 10.0.0.1 ping statistics --- 00:26:37.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.535 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:26:37.535 23:09:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.535 23:09:05 -- nvmf/common.sh@410 -- # return 0 00:26:37.535 23:09:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:37.535 23:09:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.535 23:09:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:37.535 23:09:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:37.535 23:09:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.535 23:09:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:37.535 23:09:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:37.535 23:09:05 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:37.535 23:09:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:37.535 23:09:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:37.535 23:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.535 23:09:05 -- nvmf/common.sh@469 -- # nvmfpid=37777 00:26:37.535 23:09:05 -- nvmf/common.sh@470 -- # waitforlisten 37777 00:26:37.535 23:09:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:37.535 23:09:05 -- common/autotest_common.sh@819 -- # '[' -z 37777 ']' 00:26:37.535 23:09:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.535 23:09:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:37.535 23:09:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.535 23:09:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:37.535 23:09:05 -- common/autotest_common.sh@10 -- # set +x 00:26:37.535 [2024-06-09 23:09:05.667500] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:37.535 [2024-06-09 23:09:05.667565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.535 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.796 [2024-06-09 23:09:05.737476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:37.796 [2024-06-09 23:09:05.809913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:37.796 [2024-06-09 23:09:05.810054] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.796 [2024-06-09 23:09:05.810065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
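For reference, the nvmf_tcp_init step traced above reduces to roughly the following sequence (a condensed sketch of the commands visible in the xtrace; the cvl_0_0/cvl_0_1 interface names, the 10.0.0.x addresses and the namespace name are the ones reported by this run, and the nvmf_tgt path is shortened):

    # target NIC moves into its own network namespace; the initiator NIC stays in the root namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
    modprobe nvme-tcp
    # the target application is then started inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
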
00:26:37.796 [2024-06-09 23:09:05.810074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:37.796 [2024-06-09 23:09:05.810197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.796 [2024-06-09 23:09:05.810316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.796 [2024-06-09 23:09:05.810476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.796 [2024-06-09 23:09:05.810476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:38.369 23:09:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:38.369 23:09:06 -- common/autotest_common.sh@852 -- # return 0 00:26:38.369 23:09:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:38.369 23:09:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:38.369 23:09:06 -- common/autotest_common.sh@10 -- # set +x 00:26:38.369 23:09:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.369 23:09:06 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:38.369 23:09:06 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:38.942 23:09:06 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:38.942 23:09:06 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:39.202 23:09:07 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:26:39.202 23:09:07 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:39.202 23:09:07 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:39.202 23:09:07 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:26:39.202 23:09:07 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:39.202 23:09:07 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:39.202 23:09:07 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:39.463 [2024-06-09 23:09:07.433554] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:39.463 23:09:07 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:39.463 23:09:07 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:39.463 23:09:07 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:39.724 23:09:07 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:39.724 23:09:07 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:39.986 23:09:07 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:39.986 [2024-06-09 23:09:08.100149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:39.986 23:09:08 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.247 23:09:08 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:26:40.247 23:09:08 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:40.247 23:09:08 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:40.247 23:09:08 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:26:41.636 Initializing NVMe Controllers 00:26:41.636 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:26:41.636 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:26:41.636 Initialization complete. Launching workers. 00:26:41.636 ======================================================== 00:26:41.636 Latency(us) 00:26:41.636 Device Information : IOPS MiB/s Average min max 00:26:41.636 PCIE (0000:65:00.0) NSID 1 from core 0: 80953.20 316.22 394.60 13.05 6457.33 00:26:41.636 ======================================================== 00:26:41.636 Total : 80953.20 316.22 394.60 13.05 6457.33 00:26:41.636 00:26:41.637 23:09:09 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.637 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.023 Initializing NVMe Controllers 00:26:43.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:43.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:43.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:43.023 Initialization complete. Launching workers. 00:26:43.023 ======================================================== 00:26:43.023 Latency(us) 00:26:43.023 Device Information : IOPS MiB/s Average min max 00:26:43.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.00 0.25 15739.68 661.59 46488.87 00:26:43.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.00 0.18 21544.41 6983.85 47903.79 00:26:43.023 ======================================================== 00:26:43.023 Total : 112.00 0.44 18175.59 661.59 47903.79 00:26:43.023 00:26:43.023 23:09:10 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:43.023 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.453 Initializing NVMe Controllers 00:26:44.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:44.453 Initialization complete. Launching workers. 
00:26:44.453 ======================================================== 00:26:44.453 Latency(us) 00:26:44.453 Device Information : IOPS MiB/s Average min max 00:26:44.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9207.99 35.97 3489.78 637.32 8355.56 00:26:44.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3949.00 15.43 8153.91 5537.46 17552.64 00:26:44.453 ======================================================== 00:26:44.453 Total : 13156.99 51.39 4889.70 637.32 17552.64 00:26:44.453 00:26:44.453 23:09:12 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:44.453 23:09:12 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:44.453 23:09:12 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.453 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.003 Initializing NVMe Controllers 00:26:47.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.003 Controller IO queue size 128, less than required. 00:26:47.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.003 Controller IO queue size 128, less than required. 00:26:47.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:47.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:47.003 Initialization complete. Launching workers. 00:26:47.003 ======================================================== 00:26:47.003 Latency(us) 00:26:47.003 Device Information : IOPS MiB/s Average min max 00:26:47.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 877.86 219.46 152291.54 91122.81 251536.10 00:26:47.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 536.91 134.23 246586.71 69898.59 431829.43 00:26:47.003 ======================================================== 00:26:47.004 Total : 1414.77 353.69 188077.06 69898.59 431829.43 00:26:47.004 00:26:47.004 23:09:14 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:47.004 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.004 No valid NVMe controllers or AIO or URING devices found 00:26:47.004 Initializing NVMe Controllers 00:26:47.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.004 Controller IO queue size 128, less than required. 00:26:47.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.004 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:47.004 Controller IO queue size 128, less than required. 00:26:47.004 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.004 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:47.004 WARNING: Some requested NVMe devices were skipped 00:26:47.004 23:09:15 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:47.266 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.815 Initializing NVMe Controllers 00:26:49.815 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:49.815 Controller IO queue size 128, less than required. 00:26:49.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:49.815 Controller IO queue size 128, less than required. 00:26:49.815 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:49.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:49.815 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:49.815 Initialization complete. Launching workers. 00:26:49.815 00:26:49.815 ==================== 00:26:49.815 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:49.815 TCP transport: 00:26:49.815 polls: 56153 00:26:49.815 idle_polls: 19114 00:26:49.815 sock_completions: 37039 00:26:49.815 nvme_completions: 2213 00:26:49.815 submitted_requests: 3395 00:26:49.815 queued_requests: 1 00:26:49.815 00:26:49.815 ==================== 00:26:49.815 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:49.815 TCP transport: 00:26:49.815 polls: 56527 00:26:49.815 idle_polls: 18456 00:26:49.815 sock_completions: 38071 00:26:49.815 nvme_completions: 2543 00:26:49.815 submitted_requests: 3973 00:26:49.815 queued_requests: 1 00:26:49.815 ======================================================== 00:26:49.815 Latency(us) 00:26:49.815 Device Information : IOPS MiB/s Average min max 00:26:49.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 616.84 154.21 220231.38 119274.96 306317.20 00:26:49.815 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 699.32 174.83 193638.09 106685.19 285506.30 00:26:49.815 ======================================================== 00:26:49.815 Total : 1316.17 329.04 206101.49 106685.19 306317.20 00:26:49.815 00:26:49.815 23:09:17 -- host/perf.sh@66 -- # sync 00:26:49.815 23:09:17 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.079 23:09:18 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:50.079 23:09:18 -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:26:50.079 23:09:18 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:51.022 23:09:19 -- host/perf.sh@72 -- # ls_guid=56f37639-3ce9-4949-a333-42e13853429d 00:26:51.022 23:09:19 -- host/perf.sh@73 -- # get_lvs_free_mb 56f37639-3ce9-4949-a333-42e13853429d 00:26:51.022 23:09:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=56f37639-3ce9-4949-a333-42e13853429d 00:26:51.022 23:09:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:51.022 23:09:19 -- common/autotest_common.sh@1345 -- # local fc 00:26:51.022 23:09:19 -- common/autotest_common.sh@1346 -- # local cs 00:26:51.022 23:09:19 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:51.283 23:09:19 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:51.283 { 00:26:51.283 "uuid": "56f37639-3ce9-4949-a333-42e13853429d", 00:26:51.283 "name": "lvs_0", 00:26:51.283 "base_bdev": "Nvme0n1", 00:26:51.283 "total_data_clusters": 457407, 00:26:51.283 "free_clusters": 457407, 00:26:51.283 "block_size": 512, 00:26:51.283 "cluster_size": 4194304 00:26:51.283 } 00:26:51.283 ]' 00:26:51.283 23:09:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="56f37639-3ce9-4949-a333-42e13853429d") .free_clusters' 00:26:51.283 23:09:19 -- common/autotest_common.sh@1348 -- # fc=457407 00:26:51.283 23:09:19 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="56f37639-3ce9-4949-a333-42e13853429d") .cluster_size' 00:26:51.283 23:09:19 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:51.283 23:09:19 -- common/autotest_common.sh@1352 -- # free_mb=1829628 00:26:51.283 23:09:19 -- common/autotest_common.sh@1353 -- # echo 1829628 00:26:51.283 1829628 00:26:51.283 23:09:19 -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:26:51.283 23:09:19 -- host/perf.sh@78 -- # free_mb=20480 00:26:51.283 23:09:19 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 56f37639-3ce9-4949-a333-42e13853429d lbd_0 20480 00:26:51.544 23:09:19 -- host/perf.sh@80 -- # lb_guid=3d4a771e-eea1-4d3a-92c8-5788e054cd28 00:26:51.544 23:09:19 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3d4a771e-eea1-4d3a-92c8-5788e054cd28 lvs_n_0 00:26:53.460 23:09:21 -- host/perf.sh@83 -- # ls_nested_guid=0ebe973f-b8a0-4c65-8212-239b1d42825d 00:26:53.460 23:09:21 -- host/perf.sh@84 -- # get_lvs_free_mb 0ebe973f-b8a0-4c65-8212-239b1d42825d 00:26:53.460 23:09:21 -- common/autotest_common.sh@1343 -- # local lvs_uuid=0ebe973f-b8a0-4c65-8212-239b1d42825d 00:26:53.460 23:09:21 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:53.460 23:09:21 -- common/autotest_common.sh@1345 -- # local fc 00:26:53.460 23:09:21 -- common/autotest_common.sh@1346 -- # local cs 00:26:53.460 23:09:21 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:53.460 23:09:21 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:53.460 { 00:26:53.460 "uuid": "56f37639-3ce9-4949-a333-42e13853429d", 00:26:53.460 "name": "lvs_0", 00:26:53.460 "base_bdev": "Nvme0n1", 00:26:53.460 "total_data_clusters": 457407, 00:26:53.460 "free_clusters": 452287, 00:26:53.460 "block_size": 512, 00:26:53.460 "cluster_size": 4194304 00:26:53.460 }, 00:26:53.460 { 00:26:53.460 "uuid": "0ebe973f-b8a0-4c65-8212-239b1d42825d", 00:26:53.460 "name": "lvs_n_0", 00:26:53.460 "base_bdev": "3d4a771e-eea1-4d3a-92c8-5788e054cd28", 00:26:53.460 "total_data_clusters": 5114, 00:26:53.460 "free_clusters": 5114, 00:26:53.460 "block_size": 512, 00:26:53.460 "cluster_size": 4194304 00:26:53.460 } 00:26:53.460 ]' 00:26:53.460 23:09:21 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="0ebe973f-b8a0-4c65-8212-239b1d42825d") .free_clusters' 00:26:53.460 23:09:21 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:53.460 23:09:21 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="0ebe973f-b8a0-4c65-8212-239b1d42825d") .cluster_size' 00:26:53.460 23:09:21 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:53.460 23:09:21 -- common/autotest_common.sh@1352 
-- # free_mb=20456 00:26:53.460 23:09:21 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:53.460 20456 00:26:53.460 23:09:21 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:53.460 23:09:21 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0ebe973f-b8a0-4c65-8212-239b1d42825d lbd_nest_0 20456 00:26:53.460 23:09:21 -- host/perf.sh@88 -- # lb_nested_guid=64052eb1-f7ac-4544-a2d1-6772b795de29 00:26:53.460 23:09:21 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:53.720 23:09:21 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:53.720 23:09:21 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 64052eb1-f7ac-4544-a2d1-6772b795de29 00:26:53.981 23:09:21 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:53.981 23:09:22 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:53.981 23:09:22 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:53.981 23:09:22 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:53.981 23:09:22 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:53.981 23:09:22 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.981 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.220 Initializing NVMe Controllers 00:27:06.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:06.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:06.220 Initialization complete. Launching workers. 00:27:06.220 ======================================================== 00:27:06.220 Latency(us) 00:27:06.220 Device Information : IOPS MiB/s Average min max 00:27:06.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.10 0.02 22695.29 249.46 47887.37 00:27:06.220 ======================================================== 00:27:06.220 Total : 44.10 0.02 22695.29 249.46 47887.37 00:27:06.220 00:27:06.220 23:09:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:06.220 23:09:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:06.220 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.231 Initializing NVMe Controllers 00:27:16.231 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.231 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.231 Initialization complete. Launching workers. 
00:27:16.231 ======================================================== 00:27:16.231 Latency(us) 00:27:16.231 Device Information : IOPS MiB/s Average min max 00:27:16.231 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.90 10.11 12369.81 6983.47 47886.04 00:27:16.231 ======================================================== 00:27:16.231 Total : 80.90 10.11 12369.81 6983.47 47886.04 00:27:16.231 00:27:16.231 23:09:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:16.231 23:09:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:16.231 23:09:42 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.231 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.271 Initializing NVMe Controllers 00:27:26.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:26.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:26.271 Initialization complete. Launching workers. 00:27:26.271 ======================================================== 00:27:26.271 Latency(us) 00:27:26.271 Device Information : IOPS MiB/s Average min max 00:27:26.271 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9286.60 4.53 3445.70 273.91 6759.50 00:27:26.271 ======================================================== 00:27:26.271 Total : 9286.60 4.53 3445.70 273.91 6759.50 00:27:26.271 00:27:26.271 23:09:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:26.271 23:09:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.271 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.278 Initializing NVMe Controllers 00:27:36.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:36.278 Initialization complete. Launching workers. 00:27:36.278 ======================================================== 00:27:36.278 Latency(us) 00:27:36.278 Device Information : IOPS MiB/s Average min max 00:27:36.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1572.60 196.57 20369.62 1017.31 44639.87 00:27:36.278 ======================================================== 00:27:36.278 Total : 1572.60 196.57 20369.62 1017.31 44639.87 00:27:36.278 00:27:36.278 23:10:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:36.278 23:10:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:36.278 23:10:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:36.278 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.279 Initializing NVMe Controllers 00:27:46.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:46.279 Controller IO queue size 128, less than required. 00:27:46.279 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:46.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:46.279 Initialization complete. Launching workers. 
00:27:46.279 ======================================================== 00:27:46.279 Latency(us) 00:27:46.279 Device Information : IOPS MiB/s Average min max 00:27:46.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12074.30 5.90 10602.22 1675.87 26745.41 00:27:46.279 ======================================================== 00:27:46.279 Total : 12074.30 5.90 10602.22 1675.87 26745.41 00:27:46.279 00:27:46.279 23:10:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:46.279 23:10:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:46.279 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.537 Initializing NVMe Controllers 00:27:58.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.537 Controller IO queue size 128, less than required. 00:27:58.537 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:58.537 Initialization complete. Launching workers. 00:27:58.537 ======================================================== 00:27:58.537 Latency(us) 00:27:58.537 Device Information : IOPS MiB/s Average min max 00:27:58.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1147.80 143.48 111672.83 23007.50 238596.66 00:27:58.537 ======================================================== 00:27:58.537 Total : 1147.80 143.48 111672.83 23007.50 238596.66 00:27:58.537 00:27:58.537 23:10:24 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.537 23:10:24 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 64052eb1-f7ac-4544-a2d1-6772b795de29 00:27:58.537 23:10:26 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:58.537 23:10:26 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d4a771e-eea1-4d3a-92c8-5788e054cd28 00:27:58.537 23:10:26 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:58.797 23:10:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:58.797 23:10:26 -- host/perf.sh@114 -- # nvmftestfini 00:27:58.797 23:10:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:58.797 23:10:26 -- nvmf/common.sh@116 -- # sync 00:27:58.797 23:10:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:58.797 23:10:26 -- nvmf/common.sh@119 -- # set +e 00:27:58.797 23:10:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:58.797 23:10:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:58.797 rmmod nvme_tcp 00:27:58.797 rmmod nvme_fabrics 00:27:58.797 rmmod nvme_keyring 00:27:58.797 23:10:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:58.797 23:10:26 -- nvmf/common.sh@123 -- # set -e 00:27:58.797 23:10:26 -- nvmf/common.sh@124 -- # return 0 00:27:58.797 23:10:26 -- nvmf/common.sh@477 -- # '[' -n 37777 ']' 00:27:58.797 23:10:26 -- nvmf/common.sh@478 -- # killprocess 37777 00:27:58.797 23:10:26 -- common/autotest_common.sh@926 -- # '[' -z 37777 ']' 00:27:58.798 23:10:26 -- common/autotest_common.sh@930 -- # kill -0 
37777 00:27:58.798 23:10:26 -- common/autotest_common.sh@931 -- # uname 00:27:58.798 23:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:58.798 23:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 37777 00:27:59.058 23:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:59.058 23:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:59.058 23:10:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 37777' 00:27:59.058 killing process with pid 37777 00:27:59.058 23:10:26 -- common/autotest_common.sh@945 -- # kill 37777 00:27:59.058 23:10:26 -- common/autotest_common.sh@950 -- # wait 37777 00:28:00.970 23:10:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:00.970 23:10:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:00.970 23:10:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:00.970 23:10:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:00.970 23:10:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:00.970 23:10:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.970 23:10:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.970 23:10:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.883 23:10:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:02.883 00:28:02.883 real 1m32.523s 00:28:02.883 user 5m27.700s 00:28:02.883 sys 0m13.576s 00:28:02.883 23:10:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.883 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:28:02.883 ************************************ 00:28:02.883 END TEST nvmf_perf 00:28:02.883 ************************************ 00:28:03.145 23:10:31 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:03.145 23:10:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:03.145 23:10:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:03.145 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:28:03.145 ************************************ 00:28:03.145 START TEST nvmf_fio_host 00:28:03.145 ************************************ 00:28:03.145 23:10:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:03.145 * Looking for test storage... 
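Both tests in this log size their logical volumes with the get_lvs_free_mb helper seen above and again below: the MiB figures it echoes (1829628 and 20456 in the perf test, 1829888 and 1828100 later in the fio test) are free_clusters multiplied by cluster_size, converted to MiB. A minimal sketch of that arithmetic, with the rpc.py path shortened and $lvs_uuid standing in for the store being queried:

    # free space of one lvol store, in MiB (cluster_size is reported in bytes)
    lvs_info=$(scripts/rpc.py bdev_lvol_get_lvstores)
    fc=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters" <<< "$lvs_info")
    cs=$(jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size" <<< "$lvs_info")
    free_mb=$((fc * cs / 1024 / 1024))    # e.g. 5114 * 4194304 bytes = 20456 MiB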
00:28:03.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.145 23:10:31 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.145 23:10:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.145 23:10:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.145 23:10:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.145 23:10:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@5 -- # export PATH 00:28:03.145 23:10:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.145 23:10:31 -- nvmf/common.sh@7 -- # uname -s 00:28:03.145 23:10:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.145 23:10:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.145 23:10:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.145 23:10:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.145 23:10:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.145 23:10:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.145 23:10:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.145 23:10:31 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.145 23:10:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.145 23:10:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.145 23:10:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.145 23:10:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.145 23:10:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.145 23:10:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.145 23:10:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.145 23:10:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.145 23:10:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.145 23:10:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.145 23:10:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.145 23:10:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- paths/export.sh@5 -- # export PATH 00:28:03.145 23:10:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.145 23:10:31 -- nvmf/common.sh@46 -- # : 0 00:28:03.145 23:10:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:03.145 23:10:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:03.145 23:10:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:03.145 23:10:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.145 23:10:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.145 23:10:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:03.145 23:10:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:03.145 23:10:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:03.145 23:10:31 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.145 23:10:31 -- host/fio.sh@14 -- # nvmftestinit 00:28:03.145 23:10:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:03.145 23:10:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.145 23:10:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:03.145 23:10:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:03.145 23:10:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:03.145 23:10:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.145 23:10:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.145 23:10:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.145 23:10:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:03.145 23:10:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:03.145 23:10:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:03.145 23:10:31 -- common/autotest_common.sh@10 -- # set +x 00:28:09.738 23:10:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:09.738 23:10:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:09.738 23:10:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:09.738 23:10:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:09.738 23:10:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:09.738 23:10:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:09.738 23:10:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:09.738 23:10:37 -- nvmf/common.sh@294 -- # net_devs=() 00:28:09.738 23:10:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:09.738 23:10:37 -- nvmf/common.sh@295 -- # e810=() 00:28:09.738 23:10:37 -- nvmf/common.sh@295 -- # local -ga e810 00:28:09.738 23:10:37 -- nvmf/common.sh@296 -- # x722=() 00:28:09.738 23:10:37 -- nvmf/common.sh@296 -- # local -ga x722 00:28:09.738 23:10:37 -- nvmf/common.sh@297 -- # mlx=() 00:28:09.738 23:10:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:09.738 23:10:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.738 23:10:37 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.738 23:10:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:09.738 23:10:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:09.738 23:10:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:09.738 23:10:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:09.738 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:09.738 23:10:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:09.738 23:10:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:09.738 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:09.738 23:10:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:09.738 23:10:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.738 23:10:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.738 23:10:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:09.738 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:09.738 23:10:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.738 23:10:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:09.738 23:10:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.738 23:10:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.738 23:10:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:09.738 Found net devices under 0000:4b:00.1: cvl_0_1 
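The "Found net devices under ..." messages above come from the sysfs lookup that maps each matched e810 PCI function to its kernel netdev name. Condensed into a standalone sketch (the two PCI addresses are the ones matched in this run; the loop is illustrative, not the exact nvmf/common.sh code):

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # keep just the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done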
00:28:09.738 23:10:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.738 23:10:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:09.738 23:10:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:09.738 23:10:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:09.738 23:10:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.738 23:10:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.738 23:10:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.738 23:10:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:09.738 23:10:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.738 23:10:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.738 23:10:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:09.738 23:10:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.738 23:10:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.738 23:10:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:09.738 23:10:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:09.738 23:10:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.738 23:10:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.029 23:10:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.029 23:10:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.029 23:10:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:10.029 23:10:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.029 23:10:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.029 23:10:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.029 23:10:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:10.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.538 ms 00:28:10.029 00:28:10.029 --- 10.0.0.2 ping statistics --- 00:28:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.029 rtt min/avg/max/mdev = 0.538/0.538/0.538/0.000 ms 00:28:10.029 23:10:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:28:10.029 00:28:10.029 --- 10.0.0.1 ping statistics --- 00:28:10.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.029 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:28:10.029 23:10:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.029 23:10:38 -- nvmf/common.sh@410 -- # return 0 00:28:10.029 23:10:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:10.029 23:10:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.029 23:10:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:10.029 23:10:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:10.029 23:10:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.029 23:10:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:10.029 23:10:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:10.291 23:10:38 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:10.291 23:10:38 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:10.291 23:10:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:10.291 23:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:10.291 23:10:38 -- host/fio.sh@24 -- # nvmfpid=58536 00:28:10.291 23:10:38 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:10.291 23:10:38 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:10.291 23:10:38 -- host/fio.sh@28 -- # waitforlisten 58536 00:28:10.291 23:10:38 -- common/autotest_common.sh@819 -- # '[' -z 58536 ']' 00:28:10.291 23:10:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.291 23:10:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:10.291 23:10:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.291 23:10:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:10.291 23:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:10.291 [2024-06-09 23:10:38.262232] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:10.291 [2024-06-09 23:10:38.262280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:10.291 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.291 [2024-06-09 23:10:38.330176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:10.291 [2024-06-09 23:10:38.393241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:10.291 [2024-06-09 23:10:38.393375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:10.291 [2024-06-09 23:10:38.393386] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:10.291 [2024-06-09 23:10:38.393394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
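The ping exchanges above confirm the two-port TCP topology that nvmf_tcp_init assembles before the target starts: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), with TCP port 4420 opened in iptables. Condensed from the commands logged above (address flushes, error handling, and cleanup omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace reaching the target-side address
    # nvmf_tgt is then launched inside the namespace, as in the log:
    #   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF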
00:28:10.291 [2024-06-09 23:10:38.397419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.291 [2024-06-09 23:10:38.397482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:10.291 [2024-06-09 23:10:38.397774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:10.291 [2024-06-09 23:10:38.397867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.864 23:10:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.864 23:10:39 -- common/autotest_common.sh@852 -- # return 0 00:28:10.864 23:10:39 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:11.126 [2024-06-09 23:10:39.163795] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.126 23:10:39 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:11.126 23:10:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:11.126 23:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:11.126 23:10:39 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:11.387 Malloc1 00:28:11.387 23:10:39 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.387 23:10:39 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:11.649 23:10:39 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.911 [2024-06-09 23:10:39.861266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.911 23:10:39 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:11.911 23:10:40 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:11.911 23:10:40 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:11.911 23:10:40 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:11.911 23:10:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:11.911 23:10:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.911 23:10:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:11.911 23:10:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:11.911 23:10:40 -- common/autotest_common.sh@1320 -- # shift 00:28:11.911 23:10:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:11.911 23:10:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:11.911 23:10:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:11.911 23:10:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:11.911 23:10:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:12.203 23:10:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:12.203 23:10:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:12.203 23:10:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:12.203 23:10:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:12.463 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:12.463 fio-3.35 00:28:12.463 Starting 1 thread 00:28:12.463 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.027 00:28:15.027 test: (groupid=0, jobs=1): err= 0: pid=59218: Sun Jun 9 23:10:42 2024 00:28:15.027 read: IOPS=14.6k, BW=56.9MiB/s (59.6MB/s)(114MiB/2004msec) 00:28:15.027 slat (usec): min=2, max=277, avg= 2.17, stdev= 2.31 00:28:15.027 clat (usec): min=2848, max=11291, avg=5038.40, stdev=643.27 00:28:15.027 lat (usec): min=2851, max=11297, avg=5040.57, stdev=643.50 00:28:15.027 clat percentiles (usec): 00:28:15.027 | 1.00th=[ 3752], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4555], 00:28:15.027 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 00:28:15.027 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5735], 95.00th=[ 6063], 00:28:15.027 | 99.00th=[ 6915], 99.50th=[ 7570], 99.90th=[10159], 99.95th=[10159], 00:28:15.027 | 99.99th=[11076] 00:28:15.027 bw ( KiB/s): min=56440, max=58944, per=99.95%, avg=58192.00, stdev=1189.73, samples=4 00:28:15.027 iops : min=14110, max=14736, avg=14548.00, stdev=297.43, samples=4 00:28:15.027 write: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(114MiB/2004msec); 0 zone resets 00:28:15.027 slat (usec): min=2, max=258, avg= 2.26, stdev= 1.68 00:28:15.027 clat (usec): min=2060, max=8623, avg=3704.40, stdev=482.46 00:28:15.027 lat (usec): min=2062, max=8654, avg=3706.66, stdev=482.74 00:28:15.027 clat percentiles (usec): 00:28:15.027 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3130], 20.00th=[ 3326], 00:28:15.027 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3818], 00:28:15.027 | 70.00th=[ 3916], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4359], 00:28:15.027 | 99.00th=[ 4752], 99.50th=[ 5276], 99.90th=[ 7504], 99.95th=[ 7832], 00:28:15.027 | 99.99th=[ 8455] 00:28:15.027 bw ( KiB/s): min=56824, max=59304, per=100.00%, avg=58364.00, stdev=1113.25, samples=4 00:28:15.027 iops : min=14206, max=14826, avg=14591.00, stdev=278.31, samples=4 00:28:15.027 lat (msec) : 4=39.62%, 10=60.32%, 20=0.06% 00:28:15.028 cpu : usr=70.29%, sys=22.07%, ctx=20, majf=0, minf=6 00:28:15.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:15.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:15.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:15.028 issued rwts: total=29169,29236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:15.028 00:28:15.028 Run status group 0 (all jobs): 00:28:15.028 READ: bw=56.9MiB/s (59.6MB/s), 56.9MiB/s-56.9MiB/s (59.6MB/s-59.6MB/s), io=114MiB (119MB), run=2004-2004msec 00:28:15.028 WRITE: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=114MiB (120MB), run=2004-2004msec 00:28:15.028 23:10:42 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.028 23:10:42 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.028 23:10:42 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:15.028 23:10:42 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:15.028 23:10:42 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:15.028 23:10:42 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.028 23:10:42 -- common/autotest_common.sh@1320 -- # shift 00:28:15.028 23:10:42 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:15.028 23:10:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:15.028 23:10:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:15.028 23:10:42 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:15.028 23:10:42 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:15.028 23:10:42 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:15.028 23:10:42 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:15.028 23:10:42 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:15.288 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:15.288 fio-3.35 00:28:15.288 Starting 1 thread 00:28:15.288 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.836 00:28:17.836 test: (groupid=0, jobs=1): err= 0: pid=59902: Sun Jun 9 23:10:45 2024 00:28:17.836 read: IOPS=8245, BW=129MiB/s (135MB/s)(258MiB/2005msec) 00:28:17.836 slat (usec): min=3, max=112, avg= 3.62, stdev= 1.66 00:28:17.836 clat 
(usec): min=2954, max=36551, avg=9445.95, stdev=3134.87 00:28:17.836 lat (usec): min=2957, max=36554, avg=9449.57, stdev=3135.40 00:28:17.836 clat percentiles (usec): 00:28:17.836 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6980], 00:28:17.836 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9634], 00:28:17.836 | 70.00th=[10421], 80.00th=[11207], 90.00th=[12911], 95.00th=[15401], 00:28:17.836 | 99.00th=[20579], 99.50th=[24773], 99.90th=[25297], 99.95th=[25297], 00:28:17.836 | 99.99th=[32375] 00:28:17.836 bw ( KiB/s): min=57280, max=89120, per=52.47%, avg=69216.00, stdev=15207.62, samples=4 00:28:17.836 iops : min= 3580, max= 5570, avg=4326.00, stdev=950.48, samples=4 00:28:17.836 write: IOPS=4986, BW=77.9MiB/s (81.7MB/s)(141MiB/1811msec); 0 zone resets 00:28:17.836 slat (usec): min=40, max=451, avg=41.09, stdev= 8.37 00:28:17.836 clat (usec): min=3333, max=28714, avg=10093.19, stdev=2880.89 00:28:17.836 lat (usec): min=3373, max=28758, avg=10134.28, stdev=2884.28 00:28:17.836 clat percentiles (usec): 00:28:17.836 | 1.00th=[ 6390], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8225], 00:28:17.836 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10159], 00:28:17.836 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12125], 95.00th=[13304], 00:28:17.836 | 99.00th=[25297], 99.50th=[26346], 99.90th=[27657], 99.95th=[28443], 00:28:17.836 | 99.99th=[28705] 00:28:17.836 bw ( KiB/s): min=59040, max=91680, per=89.96%, avg=71768.00, stdev=15436.62, samples=4 00:28:17.836 iops : min= 3690, max= 5730, avg=4485.50, stdev=964.79, samples=4 00:28:17.836 lat (msec) : 4=0.25%, 10=62.09%, 20=36.05%, 50=1.61% 00:28:17.836 cpu : usr=80.34%, sys=14.02%, ctx=14, majf=0, minf=15 00:28:17.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:28:17.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:17.836 issued rwts: total=16532,9030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:17.836 00:28:17.836 Run status group 0 (all jobs): 00:28:17.836 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2005-2005msec 00:28:17.836 WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=141MiB (148MB), run=1811-1811msec 00:28:17.836 23:10:45 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.836 23:10:45 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:17.836 23:10:45 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:17.836 23:10:45 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:17.836 23:10:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:17.836 23:10:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:17.836 23:10:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:17.836 23:10:45 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:17.836 23:10:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:17.836 23:10:45 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:17.836 23:10:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:28:17.836 23:10:45 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:28:18.096 Nvme0n1 00:28:18.357 23:10:46 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:18.929 23:10:46 -- host/fio.sh@53 -- # ls_guid=6589101c-8663-4a8b-9943-d9d4552256e7 00:28:18.929 23:10:46 -- host/fio.sh@54 -- # get_lvs_free_mb 6589101c-8663-4a8b-9943-d9d4552256e7 00:28:18.929 23:10:46 -- common/autotest_common.sh@1343 -- # local lvs_uuid=6589101c-8663-4a8b-9943-d9d4552256e7 00:28:18.929 23:10:46 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:18.929 23:10:46 -- common/autotest_common.sh@1345 -- # local fc 00:28:18.929 23:10:46 -- common/autotest_common.sh@1346 -- # local cs 00:28:18.929 23:10:46 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:18.929 23:10:47 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:18.929 { 00:28:18.929 "uuid": "6589101c-8663-4a8b-9943-d9d4552256e7", 00:28:18.929 "name": "lvs_0", 00:28:18.929 "base_bdev": "Nvme0n1", 00:28:18.929 "total_data_clusters": 1787, 00:28:18.929 "free_clusters": 1787, 00:28:18.929 "block_size": 512, 00:28:18.929 "cluster_size": 1073741824 00:28:18.929 } 00:28:18.929 ]' 00:28:18.929 23:10:47 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="6589101c-8663-4a8b-9943-d9d4552256e7") .free_clusters' 00:28:18.929 23:10:47 -- common/autotest_common.sh@1348 -- # fc=1787 00:28:18.929 23:10:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="6589101c-8663-4a8b-9943-d9d4552256e7") .cluster_size' 00:28:19.191 23:10:47 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:19.191 23:10:47 -- common/autotest_common.sh@1352 -- # free_mb=1829888 00:28:19.191 23:10:47 -- common/autotest_common.sh@1353 -- # echo 1829888 00:28:19.191 1829888 00:28:19.191 23:10:47 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:28:19.191 d4d616ed-55cb-4609-af67-c913d9f76554 00:28:19.191 23:10:47 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:19.452 23:10:47 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:19.452 23:10:47 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:19.713 23:10:47 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:19.713 23:10:47 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:19.713 23:10:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:19.713 23:10:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:19.713 23:10:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:19.713 23:10:47 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.713 23:10:47 -- common/autotest_common.sh@1320 -- # shift 00:28:19.713 23:10:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:19.713 23:10:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:19.713 23:10:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:19.713 23:10:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:19.713 23:10:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:19.713 23:10:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:19.713 23:10:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:19.713 23:10:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:19.975 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:19.975 fio-3.35 00:28:19.975 Starting 1 thread 00:28:19.975 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.521 00:28:22.521 test: (groupid=0, jobs=1): err= 0: pid=61110: Sun Jun 9 23:10:50 2024 00:28:22.521 read: IOPS=7599, BW=29.7MiB/s (31.1MB/s)(59.5MiB/2006msec) 00:28:22.521 slat (usec): min=2, max=108, avg= 2.26, stdev= 1.22 00:28:22.521 clat (usec): min=5187, max=16007, avg=9534.77, stdev=1341.73 00:28:22.521 lat (usec): min=5202, max=16009, avg=9537.03, stdev=1341.72 00:28:22.521 clat percentiles (usec): 00:28:22.521 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8586], 00:28:22.521 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:28:22.521 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[12125], 00:28:22.521 | 99.00th=[13698], 99.50th=[14222], 99.90th=[15795], 99.95th=[15926], 00:28:22.521 | 99.99th=[16057] 00:28:22.521 bw ( KiB/s): min=29312, max=31200, per=99.82%, avg=30342.00, stdev=823.06, samples=4 00:28:22.521 iops : min= 7328, max= 7800, avg=7585.50, stdev=205.77, samples=4 00:28:22.521 write: IOPS=7588, BW=29.6MiB/s (31.1MB/s)(59.5MiB/2006msec); 0 zone resets 00:28:22.521 slat (nsec): min=2123, max=101541, avg=2367.22, stdev=883.68 00:28:22.521 clat (usec): min=1803, max=11285, avg=7225.60, stdev=935.53 00:28:22.521 lat (usec): min=1810, max=11287, avg=7227.97, stdev=935.54 00:28:22.521 clat percentiles (usec): 00:28:22.521 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6521], 00:28:22.521 | 30.00th=[ 6849], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7504], 00:28:22.521 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8356], 95.00th=[ 8717], 00:28:22.521 | 99.00th=[ 9372], 99.50th=[ 9765], 99.90th=[10421], 99.95th=[10552], 00:28:22.521 | 
99.99th=[11338] 00:28:22.521 bw ( KiB/s): min=30120, max=30480, per=99.94%, avg=30336.00, stdev=155.54, samples=4 00:28:22.521 iops : min= 7530, max= 7620, avg=7584.00, stdev=38.88, samples=4 00:28:22.521 lat (msec) : 2=0.01%, 4=0.08%, 10=85.38%, 20=14.54% 00:28:22.521 cpu : usr=65.44%, sys=28.43%, ctx=48, majf=0, minf=6 00:28:22.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:22.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:22.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:22.521 issued rwts: total=15244,15222,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:22.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:22.521 00:28:22.521 Run status group 0 (all jobs): 00:28:22.521 READ: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=59.5MiB (62.4MB), run=2006-2006msec 00:28:22.521 WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=59.5MiB (62.3MB), run=2006-2006msec 00:28:22.521 23:10:50 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:22.521 23:10:50 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:23.464 23:10:51 -- host/fio.sh@64 -- # ls_nested_guid=86e9af80-34c0-40ba-8129-a63bf5979e13 00:28:23.464 23:10:51 -- host/fio.sh@65 -- # get_lvs_free_mb 86e9af80-34c0-40ba-8129-a63bf5979e13 00:28:23.464 23:10:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=86e9af80-34c0-40ba-8129-a63bf5979e13 00:28:23.464 23:10:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:23.464 23:10:51 -- common/autotest_common.sh@1345 -- # local fc 00:28:23.464 23:10:51 -- common/autotest_common.sh@1346 -- # local cs 00:28:23.464 23:10:51 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.464 23:10:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:23.464 { 00:28:23.464 "uuid": "6589101c-8663-4a8b-9943-d9d4552256e7", 00:28:23.464 "name": "lvs_0", 00:28:23.464 "base_bdev": "Nvme0n1", 00:28:23.464 "total_data_clusters": 1787, 00:28:23.464 "free_clusters": 0, 00:28:23.464 "block_size": 512, 00:28:23.464 "cluster_size": 1073741824 00:28:23.464 }, 00:28:23.464 { 00:28:23.464 "uuid": "86e9af80-34c0-40ba-8129-a63bf5979e13", 00:28:23.464 "name": "lvs_n_0", 00:28:23.464 "base_bdev": "d4d616ed-55cb-4609-af67-c913d9f76554", 00:28:23.464 "total_data_clusters": 457025, 00:28:23.464 "free_clusters": 457025, 00:28:23.464 "block_size": 512, 00:28:23.464 "cluster_size": 4194304 00:28:23.464 } 00:28:23.464 ]' 00:28:23.464 23:10:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="86e9af80-34c0-40ba-8129-a63bf5979e13") .free_clusters' 00:28:23.725 23:10:51 -- common/autotest_common.sh@1348 -- # fc=457025 00:28:23.725 23:10:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="86e9af80-34c0-40ba-8129-a63bf5979e13") .cluster_size' 00:28:23.725 23:10:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:23.725 23:10:51 -- common/autotest_common.sh@1352 -- # free_mb=1828100 00:28:23.725 23:10:51 -- common/autotest_common.sh@1353 -- # echo 1828100 00:28:23.725 1828100 00:28:23.725 23:10:51 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:28:24.667 
a1c8916d-6c54-4d24-b610-8afd9a3a195a 00:28:24.667 23:10:52 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:24.928 23:10:52 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:24.928 23:10:53 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:25.189 23:10:53 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:25.189 23:10:53 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:25.189 23:10:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:25.189 23:10:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:25.189 23:10:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:25.189 23:10:53 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:25.189 23:10:53 -- common/autotest_common.sh@1320 -- # shift 00:28:25.189 23:10:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:25.189 23:10:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:25.189 23:10:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:25.189 23:10:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:25.189 23:10:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:25.189 23:10:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:25.189 23:10:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:25.189 23:10:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:25.450 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:25.450 fio-3.35 00:28:25.450 Starting 1 thread 00:28:25.450 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.995 00:28:27.995 test: (groupid=0, jobs=1): err= 0: pid=62298: Sun Jun 9 23:10:55 2024 00:28:27.995 read: IOPS=6680, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec) 00:28:27.995 slat (usec): min=2, 
max=107, avg= 2.26, stdev= 1.24 00:28:27.995 clat (usec): min=3697, max=17948, avg=10754.23, stdev=1237.43 00:28:27.995 lat (usec): min=3715, max=17951, avg=10756.48, stdev=1237.37 00:28:27.995 clat percentiles (usec): 00:28:27.995 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:28:27.995 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:28:27.995 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[12911], 00:28:27.995 | 99.00th=[14353], 99.50th=[15008], 99.90th=[16450], 99.95th=[16712], 00:28:27.995 | 99.99th=[17695] 00:28:27.995 bw ( KiB/s): min=25600, max=27344, per=99.85%, avg=26684.00, stdev=763.05, samples=4 00:28:27.995 iops : min= 6400, max= 6836, avg=6671.00, stdev=190.76, samples=4 00:28:27.995 write: IOPS=6684, BW=26.1MiB/s (27.4MB/s)(52.4MiB/2008msec); 0 zone resets 00:28:27.995 slat (nsec): min=2132, max=97968, avg=2371.77, stdev=898.27 00:28:27.995 clat (usec): min=1584, max=15013, avg=8315.38, stdev=984.98 00:28:27.995 lat (usec): min=1591, max=15015, avg=8317.75, stdev=984.96 00:28:27.995 clat percentiles (usec): 00:28:27.995 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7111], 20.00th=[ 7570], 00:28:27.995 | 30.00th=[ 7898], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:28:27.995 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9503], 95.00th=[ 9765], 00:28:27.995 | 99.00th=[10421], 99.50th=[10814], 99.90th=[13698], 99.95th=[14877], 00:28:27.995 | 99.99th=[15008] 00:28:27.995 bw ( KiB/s): min=26624, max=26944, per=100.00%, avg=26736.00, stdev=141.91, samples=4 00:28:27.995 iops : min= 6656, max= 6736, avg=6684.00, stdev=35.48, samples=4 00:28:27.995 lat (msec) : 2=0.01%, 4=0.08%, 10=61.20%, 20=38.72% 00:28:27.995 cpu : usr=60.34%, sys=32.84%, ctx=64, majf=0, minf=6 00:28:27.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:27.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:27.995 issued rwts: total=13415,13422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:27.995 00:28:27.995 Run status group 0 (all jobs): 00:28:27.995 READ: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (54.9MB), run=2008-2008msec 00:28:27.995 WRITE: bw=26.1MiB/s (27.4MB/s), 26.1MiB/s-26.1MiB/s (27.4MB/s-27.4MB/s), io=52.4MiB (55.0MB), run=2008-2008msec 00:28:27.995 23:10:55 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:27.995 23:10:56 -- host/fio.sh@74 -- # sync 00:28:27.995 23:10:56 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:30.538 23:10:58 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:30.538 23:10:58 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:30.799 23:10:58 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:30.799 23:10:58 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:33.390 23:11:00 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:33.390 23:11:00 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:28:33.390 23:11:00 -- host/fio.sh@86 -- # nvmftestfini 00:28:33.390 23:11:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:33.390 23:11:00 -- nvmf/common.sh@116 -- # sync 00:28:33.390 23:11:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:33.390 23:11:00 -- nvmf/common.sh@119 -- # set +e 00:28:33.390 23:11:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:33.390 23:11:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:33.390 rmmod nvme_tcp 00:28:33.390 rmmod nvme_fabrics 00:28:33.390 rmmod nvme_keyring 00:28:33.390 23:11:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:33.390 23:11:01 -- nvmf/common.sh@123 -- # set -e 00:28:33.390 23:11:01 -- nvmf/common.sh@124 -- # return 0 00:28:33.390 23:11:01 -- nvmf/common.sh@477 -- # '[' -n 58536 ']' 00:28:33.390 23:11:01 -- nvmf/common.sh@478 -- # killprocess 58536 00:28:33.390 23:11:01 -- common/autotest_common.sh@926 -- # '[' -z 58536 ']' 00:28:33.390 23:11:01 -- common/autotest_common.sh@930 -- # kill -0 58536 00:28:33.390 23:11:01 -- common/autotest_common.sh@931 -- # uname 00:28:33.390 23:11:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:33.390 23:11:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 58536 00:28:33.390 23:11:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:33.390 23:11:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:33.390 23:11:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 58536' 00:28:33.390 killing process with pid 58536 00:28:33.390 23:11:01 -- common/autotest_common.sh@945 -- # kill 58536 00:28:33.390 23:11:01 -- common/autotest_common.sh@950 -- # wait 58536 00:28:33.390 23:11:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:33.390 23:11:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:33.390 23:11:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:33.390 23:11:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:33.390 23:11:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:33.390 23:11:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.390 23:11:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.390 23:11:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.304 23:11:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:35.304 00:28:35.304 real 0m32.209s 00:28:35.304 user 2m40.253s 00:28:35.304 sys 0m9.480s 00:28:35.304 23:11:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:35.304 23:11:03 -- common/autotest_common.sh@10 -- # set +x 00:28:35.304 ************************************ 00:28:35.304 END TEST nvmf_fio_host 00:28:35.304 ************************************ 00:28:35.304 23:11:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:35.304 23:11:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:35.304 23:11:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:35.304 23:11:03 -- common/autotest_common.sh@10 -- # set +x 00:28:35.304 ************************************ 00:28:35.304 START TEST nvmf_failover 00:28:35.304 ************************************ 00:28:35.304 23:11:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:35.304 * Looking for test storage... 
00:28:35.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:35.304 23:11:03 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.304 23:11:03 -- nvmf/common.sh@7 -- # uname -s 00:28:35.304 23:11:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.304 23:11:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.304 23:11:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.304 23:11:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.304 23:11:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.304 23:11:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.304 23:11:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.304 23:11:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.304 23:11:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.304 23:11:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.304 23:11:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:35.304 23:11:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:35.304 23:11:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.304 23:11:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.304 23:11:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:35.304 23:11:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.304 23:11:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.304 23:11:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.304 23:11:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.304 23:11:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.305 23:11:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.305 23:11:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.305 23:11:03 -- paths/export.sh@5 -- # export PATH 00:28:35.305 23:11:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.305 23:11:03 -- nvmf/common.sh@46 -- # : 0 00:28:35.305 23:11:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:35.305 23:11:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:35.305 23:11:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:35.305 23:11:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.305 23:11:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.305 23:11:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:35.305 23:11:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:35.305 23:11:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:35.305 23:11:03 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:35.305 23:11:03 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:35.305 23:11:03 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:35.305 23:11:03 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:35.305 23:11:03 -- host/failover.sh@18 -- # nvmftestinit 00:28:35.305 23:11:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:35.305 23:11:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.305 23:11:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:35.305 23:11:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:35.305 23:11:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:35.305 23:11:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.305 23:11:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:35.305 23:11:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.305 23:11:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:35.305 23:11:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:35.305 23:11:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:35.305 23:11:03 -- common/autotest_common.sh@10 -- # set +x 00:28:41.891 23:11:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:41.891 23:11:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:41.891 23:11:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:41.891 23:11:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:41.891 23:11:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:41.891 23:11:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:41.891 23:11:10 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:41.891 23:11:10 -- nvmf/common.sh@294 -- # net_devs=() 00:28:41.891 23:11:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:41.891 23:11:10 -- nvmf/common.sh@295 -- # e810=() 00:28:41.891 23:11:10 -- nvmf/common.sh@295 -- # local -ga e810 00:28:41.891 23:11:10 -- nvmf/common.sh@296 -- # x722=() 00:28:41.891 23:11:10 -- nvmf/common.sh@296 -- # local -ga x722 00:28:41.891 23:11:10 -- nvmf/common.sh@297 -- # mlx=() 00:28:41.891 23:11:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:41.891 23:11:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.891 23:11:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:41.891 23:11:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:41.891 23:11:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.891 23:11:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:41.891 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:41.891 23:11:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:41.891 23:11:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:41.891 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:41.891 23:11:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.891 23:11:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.891 23:11:10 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.891 23:11:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:41.891 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:41.891 23:11:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.891 23:11:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:41.891 23:11:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.891 23:11:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.891 23:11:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:41.891 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:41.891 23:11:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.891 23:11:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:41.891 23:11:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:41.891 23:11:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:41.891 23:11:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.891 23:11:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.891 23:11:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.891 23:11:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:41.891 23:11:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.891 23:11:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.891 23:11:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:41.891 23:11:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.891 23:11:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.891 23:11:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:41.891 23:11:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:42.152 23:11:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.152 23:11:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.152 23:11:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.152 23:11:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.152 23:11:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:42.152 23:11:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.414 23:11:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.414 23:11:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.414 23:11:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:42.414 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.414 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:28:42.414 00:28:42.414 --- 10.0.0.2 ping statistics --- 00:28:42.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.414 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:28:42.414 23:11:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:28:42.414 00:28:42.414 --- 10.0.0.1 ping statistics --- 00:28:42.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.414 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:28:42.414 23:11:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.414 23:11:10 -- nvmf/common.sh@410 -- # return 0 00:28:42.414 23:11:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:42.414 23:11:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.414 23:11:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:42.414 23:11:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:42.414 23:11:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.414 23:11:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:42.414 23:11:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:42.414 23:11:10 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:42.414 23:11:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:42.414 23:11:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:42.414 23:11:10 -- common/autotest_common.sh@10 -- # set +x 00:28:42.414 23:11:10 -- nvmf/common.sh@469 -- # nvmfpid=67745 00:28:42.414 23:11:10 -- nvmf/common.sh@470 -- # waitforlisten 67745 00:28:42.414 23:11:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:42.414 23:11:10 -- common/autotest_common.sh@819 -- # '[' -z 67745 ']' 00:28:42.414 23:11:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.414 23:11:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:42.414 23:11:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.414 23:11:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:42.414 23:11:10 -- common/autotest_common.sh@10 -- # set +x 00:28:42.414 [2024-06-09 23:11:10.504523] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:42.414 [2024-06-09 23:11:10.504617] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.414 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.414 [2024-06-09 23:11:10.575000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.675 [2024-06-09 23:11:10.648804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:42.675 [2024-06-09 23:11:10.648926] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.675 [2024-06-09 23:11:10.648934] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.675 [2024-06-09 23:11:10.648941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.675 [2024-06-09 23:11:10.649052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.675 [2024-06-09 23:11:10.649208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.675 [2024-06-09 23:11:10.649209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.247 23:11:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:43.247 23:11:11 -- common/autotest_common.sh@852 -- # return 0 00:28:43.247 23:11:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:43.247 23:11:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:43.247 23:11:11 -- common/autotest_common.sh@10 -- # set +x 00:28:43.247 23:11:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.247 23:11:11 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.508 [2024-06-09 23:11:11.445577] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.508 23:11:11 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:43.508 Malloc0 00:28:43.508 23:11:11 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:43.769 23:11:11 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:44.030 23:11:11 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.030 [2024-06-09 23:11:12.097904] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.030 23:11:12 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:44.291 [2024-06-09 23:11:12.266366] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:44.291 23:11:12 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:44.291 [2024-06-09 23:11:12.430928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:44.291 23:11:12 -- host/failover.sh@31 -- # bdevperf_pid=68233 00:28:44.291 23:11:12 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:44.291 23:11:12 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:44.291 23:11:12 -- host/failover.sh@34 -- # waitforlisten 68233 /var/tmp/bdevperf.sock 00:28:44.291 23:11:12 -- common/autotest_common.sh@819 -- # '[' -z 68233 ']' 00:28:44.291 23:11:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.291 23:11:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:44.291 23:11:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:44.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:44.291 23:11:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:44.291 23:11:12 -- common/autotest_common.sh@10 -- # set +x 00:28:45.234 23:11:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:45.234 23:11:13 -- common/autotest_common.sh@852 -- # return 0 00:28:45.234 23:11:13 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:45.495 NVMe0n1 00:28:45.495 23:11:13 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:46.068 00:28:46.068 23:11:14 -- host/failover.sh@39 -- # run_test_pid=68509 00:28:46.068 23:11:14 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:46.068 23:11:14 -- host/failover.sh@41 -- # sleep 1 00:28:47.013 23:11:15 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:47.013 [2024-06-09 23:11:15.145033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.013 [2024-06-09 23:11:15.145073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.013 [2024-06-09 23:11:15.145079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.013 [2024-06-09 23:11:15.145084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.013 [2024-06-09 23:11:15.145089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145124] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the 
state(5) to be set 00:28:47.014 [2024-06-09 
23:11:15.145324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 [2024-06-09 23:11:15.145364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16316e0 is same with the state(5) to be set 00:28:47.014 23:11:15 -- host/failover.sh@45 -- # sleep 3 00:28:50.321 23:11:18 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:50.584 00:28:50.584 23:11:18 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:50.584 [2024-06-09 23:11:18.685615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1631ef0 is same with the state(5) to be set 00:28:50.584 [2024-06-09 23:11:18.685705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1631ef0 is same with the 
state(5) to be set 00:28:50.584 23:11:18 -- host/failover.sh@50 -- # sleep 3 00:28:53.889 23:11:21 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.889 [2024-06-09 23:11:21.859780] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.889 23:11:21 -- host/failover.sh@55 -- # sleep 1 00:28:54.833 23:11:22 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:55.095 [2024-06-09 23:11:23.034090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.095 [2024-06-09 23:11:23.034179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x13dc6f0 is same with the 
state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 [2024-06-09 23:11:23.034681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dc6f0 is same with the state(5) to be set 00:28:55.096 23:11:23 -- host/failover.sh@59 -- # wait 68509 00:29:01.732 0 00:29:01.732 23:11:29 -- host/failover.sh@61 -- # killprocess 68233 00:29:01.732 23:11:29 -- common/autotest_common.sh@926 -- # '[' -z 68233 ']' 00:29:01.732 23:11:29 -- 
common/autotest_common.sh@930 -- # kill -0 68233 00:29:01.732 23:11:29 -- common/autotest_common.sh@931 -- # uname 00:29:01.732 23:11:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:01.732 23:11:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68233 00:29:01.732 23:11:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:01.732 23:11:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:01.732 23:11:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68233' 00:29:01.732 killing process with pid 68233 00:29:01.732 23:11:29 -- common/autotest_common.sh@945 -- # kill 68233 00:29:01.732 23:11:29 -- common/autotest_common.sh@950 -- # wait 68233 00:29:01.732 23:11:29 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:01.732 [2024-06-09 23:11:12.502474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:01.732 [2024-06-09 23:11:12.502533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68233 ] 00:29:01.732 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.732 [2024-06-09 23:11:12.561305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.732 [2024-06-09 23:11:12.623465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.732 Running I/O for 15 seconds... 00:29:01.732 [2024-06-09 23:11:15.145841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.145985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.145994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.146001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.146011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.146017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.146027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.146034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.146043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.146050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.732 [2024-06-09 23:11:15.146064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.732 [2024-06-09 23:11:15.146072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 
[2024-06-09 23:11:15.146646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.733 [2024-06-09 23:11:15.146720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.733 [2024-06-09 23:11:15.146736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.733 [2024-06-09 23:11:15.146745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.146785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.146800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.146817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.146990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.146997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37120 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.734 [2024-06-09 23:11:15.147210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:01.734 [2024-06-09 23:11:15.147308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.734 [2024-06-09 23:11:15.147389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.734 [2024-06-09 23:11:15.147398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147475] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:37264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:37320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.735 [2024-06-09 23:11:15.147864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.735 [2024-06-09 23:11:15.147965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.147986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:01.735 [2024-06-09 23:11:15.147993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:01.735 [2024-06-09 23:11:15.148001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36816 len:8 PRP1 0x0 PRP2 0x0 00:29:01.735 [2024-06-09 23:11:15.148008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.148044] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8566d0 was disconnected and freed. reset controller. 00:29:01.735 [2024-06-09 23:11:15.148058] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:01.735 [2024-06-09 23:11:15.148077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.735 [2024-06-09 23:11:15.148085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.148094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.735 [2024-06-09 23:11:15.148101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.735 [2024-06-09 23:11:15.148109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.735 [2024-06-09 23:11:15.148116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:15.148123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.736 [2024-06-09 23:11:15.148130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:15.148137] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.736 [2024-06-09 23:11:15.150507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.736 [2024-06-09 23:11:15.150528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846130 (9): Bad file descriptor 00:29:01.736 [2024-06-09 23:11:15.312833] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
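The failover sequence traced above reduces to two target-side RPC calls issued by host/failover.sh (lines @53 and @57 of the script, as echoed in the log). Below is a minimal sketch, not the test script itself, of replaying just those calls; it assumes a running SPDK nvmf TCP target that already exposes subsystem nqn.2016-06.io.spdk:cnode1 and uses the in-tree scripts/rpc.py path shown in the log:

#!/usr/bin/env bash
# Sketch only: replay the listener add/remove RPCs seen in the failover log.
# Assumes the nvmf target is up and nqn.2016-06.io.spdk:cnode1 already exists.
set -euo pipefail

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Add a TCP listener on 10.0.0.2:4420 (host/failover.sh@53 above).
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1
# Remove the 10.0.0.2:4422 listener (host/failover.sh@57 above), which is what
# drives the host-side path failover exercised by the test.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422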
00:29:01.736 [2024-06-09 23:11:18.686330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686543] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.736 [2024-06-09 23:11:18.686883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.736 [2024-06-09 23:11:18.686899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.736 [2024-06-09 23:11:18.686908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.686925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.686941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.686959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.686976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.686992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.686999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90504 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 
[2024-06-09 23:11:18.687213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.737 [2024-06-09 23:11:18.687428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.737 [2024-06-09 23:11:18.687513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.737 [2024-06-09 23:11:18.687522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 
23:11:18.687883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.687971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.687987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.688119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.688136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.688152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.738 [2024-06-09 23:11:18.688168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.738 [2024-06-09 23:11:18.688177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.738 [2024-06-09 23:11:18.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:106 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91024 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.739 [2024-06-09 23:11:18.688433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:18.688466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:01.739 [2024-06-09 23:11:18.688492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:01.739 [2024-06-09 23:11:18.688499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90408 len:8 PRP1 0x0 PRP2 0x0 00:29:01.739 [2024-06-09 23:11:18.688509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688545] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8725a0 was disconnected and freed. reset controller. 
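Editor's note: the long run of paired *NOTICE* records above is SPDK printing, for every I/O still queued on qpair 1, the command (nvme_io_qpair_print_command) and its completion (spdk_nvme_print_completion); the "(00/08)" pair reads as status code type 0x00 (generic) with status code 0x08, i.e. the command was aborted because its submission queue was deleted. Below is a minimal sketch for tallying these notices from a saved console log; it assumes only the message format visible above, and the file path in the usage comment is hypothetical.

    import re
    from collections import Counter

    # Command prints, e.g. "*NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90240 len:8 ..."
    CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
    # Completion prints, e.g. "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."
    CPL_RE = re.compile(r"\*NOTICE\*: ([A-Z -]+) \((\d{2})/(\d{2})\) qid:(\d+)")

    def summarize(log_text: str) -> None:
        """Tally printed I/O commands per opcode and printed completions per (sct/sc) pair."""
        opcodes = Counter(m.group(1) for m in CMD_RE.finditer(log_text))
        statuses = Counter((m.group(1).strip(), m.group(2), m.group(3))
                           for m in CPL_RE.finditer(log_text))
        for op, n in sorted(opcodes.items()):
            print(f"{op:5s} command prints: {n}")
        for (name, sct, sc), n in sorted(statuses.items()):
            print(f"completion '{name}' sct=0x{sct} sc=0x{sc}: {n}")

    # Usage (hypothetical path):
    #   summarize(open("nvmf-tcp-phy-autotest.console.log").read())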
00:29:01.739 [2024-06-09 23:11:18.688554] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:01.739 [2024-06-09 23:11:18.688572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.739 [2024-06-09 23:11:18.688580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.739 [2024-06-09 23:11:18.688596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.739 [2024-06-09 23:11:18.688611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.739 [2024-06-09 23:11:18.688626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:18.688633] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.739 [2024-06-09 23:11:18.690976] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.739 [2024-06-09 23:11:18.691008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846130 (9): Bad file descriptor 00:29:01.739 [2024-06-09 23:11:18.717146] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
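Editor's note: the handful of records just above capture the failover itself: the freed qpair triggers bdev_nvme_failover_trid from 10.0.0.2:4421 to 10.0.0.2:4422, the admin queue's ASYNC EVENT REQUESTs are aborted, the controller nqn.2016-06.io.spdk:cnode1 briefly reports a failed state plus a flush error on the old socket, and the reset completes roughly 28-29 ms later by the bracketed timestamps (23:11:18.688554 to 23:11:18.717146). The sketch below, assuming only the record format shown above, extracts that latency from a console log; the \d+ source-line fields keep it independent of this exact SPDK revision, and the function name is illustrative.

    import re
    from datetime import datetime

    TS = r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]"
    FAILOVER_RE = re.compile(
        TS + r" bdev_nvme\.c:\d+:bdev_nvme_failover_trid: \*NOTICE\*: "
             r"Start failover from (\S+) to (\S+)")
    RESET_OK_RE = re.compile(
        TS + r" bdev_nvme\.c:\d+:_bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: "
             r"Resetting controller successful")

    def failover_latency(log_text: str) -> None:
        """Pair each 'Start failover' notice with the next successful reset and print the delay."""
        fmt = "%Y-%m-%d %H:%M:%S.%f"
        # Assumes resets complete in the order their failovers were started.
        resets = [(m.start(), datetime.strptime(m.group(1), fmt))
                  for m in RESET_OK_RE.finditer(log_text)]
        for m in FAILOVER_RE.finditer(log_text):
            t0 = datetime.strptime(m.group(1), fmt)
            done = next((t for pos, t in resets if pos > m.start()), None)
            if done is not None:
                ms = (done - t0).total_seconds() * 1000
                print(f"failover {m.group(2)} -> {m.group(3)}: reset completed after {ms:.1f} ms")

    # Usage (hypothetical path):
    #   failover_latency(open("nvmf-tcp-phy-autotest.console.log").read())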
00:29:01.739 [2024-06-09 23:11:23.034966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035183] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.739 [2024-06-09 23:11:23.035240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.739 [2024-06-09 23:11:23.035250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58688 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.740 [2024-06-09 23:11:23.035754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.740 [2024-06-09 23:11:23.035803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:01.740 [2024-06-09 23:11:23.035868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.740 [2024-06-09 23:11:23.035901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.740 [2024-06-09 23:11:23.035917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.740 [2024-06-09 23:11:23.035926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.035934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.035943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.035950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.035960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.035967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.035977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.035984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.035993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.741 [2024-06-09 23:11:23.036361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.741 [2024-06-09 23:11:23.036481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.741 [2024-06-09 23:11:23.036490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:01.742 [2024-06-09 23:11:23.036538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036705] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036868] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.036907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.036987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.036999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:01.742 [2024-06-09 23:11:23.037006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.742 [2024-06-09 23:11:23.037120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.742 [2024-06-09 23:11:23.037129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x85c0f0 is same with the state(5) to be set 00:29:01.742 [2024-06-09 23:11:23.037137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:01.742 [2024-06-09 23:11:23.037143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:01.742 [2024-06-09 23:11:23.037151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58696 len:8 PRP1 0x0 PRP2 0x0 00:29:01.743 [2024-06-09 23:11:23.037158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.743 [2024-06-09 23:11:23.037194] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x85c0f0 was disconnected and freed. reset controller. 
00:29:01.743 [2024-06-09 23:11:23.037203] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:01.743 [2024-06-09 23:11:23.037222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.743 [2024-06-09 23:11:23.037231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.743 [2024-06-09 23:11:23.037239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.743 [2024-06-09 23:11:23.037249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.743 [2024-06-09 23:11:23.037257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.743 [2024-06-09 23:11:23.037264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.743 [2024-06-09 23:11:23.037272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:01.743 [2024-06-09 23:11:23.037279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.743 [2024-06-09 23:11:23.037286] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:01.743 [2024-06-09 23:11:23.039964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:01.743 [2024-06-09 23:11:23.039992] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846130 (9): Bad file descriptor 00:29:01.743 [2024-06-09 23:11:23.200048] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
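The wall of ABORTED - SQ DELETION notices above is the expected signature of a path switch: once the TCP qpair on 10.0.0.2:4422 is torn down, every I/O still queued on that submission queue is completed with an abort status, the qpair is freed, and bdev_nvme fails the trid over to 10.0.0.2:4420 and resets the controller. A quick way to summarize a flood like this offline is a couple of greps over the captured output; a minimal sketch, assuming the console output was saved to a file ($LOG below is a placeholder, not a path used by the test scripts):

LOG=console.log                                      # placeholder for the saved console/bdevperf output
grep -c 'ABORTED - SQ DELETION' "$LOG"               # queued I/Os aborted during qpair teardown
grep -c 'Start failover from' "$LOG"                 # failover events between ports 4420/4421/4422
grep -c 'Resetting controller successful' "$LOG"     # completed resets (the script checks for exactly 3 below)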
00:29:01.743 00:29:01.743 Latency(us) 00:29:01.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.743 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:01.743 Verification LBA range: start 0x0 length 0x4000 00:29:01.743 NVMe0n1 : 15.00 16421.04 64.14 1331.71 0.00 7194.12 1092.27 21189.97 00:29:01.743 =================================================================================================================== 00:29:01.743 Total : 16421.04 64.14 1331.71 0.00 7194.12 1092.27 21189.97 00:29:01.743 Received shutdown signal, test time was about 15.000000 seconds 00:29:01.743 00:29:01.743 Latency(us) 00:29:01.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.743 =================================================================================================================== 00:29:01.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.743 23:11:29 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:29:01.743 23:11:29 -- host/failover.sh@65 -- # count=3 00:29:01.743 23:11:29 -- host/failover.sh@67 -- # (( count != 3 )) 00:29:01.743 23:11:29 -- host/failover.sh@73 -- # bdevperf_pid=71434 00:29:01.743 23:11:29 -- host/failover.sh@75 -- # waitforlisten 71434 /var/tmp/bdevperf.sock 00:29:01.743 23:11:29 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:29:01.743 23:11:29 -- common/autotest_common.sh@819 -- # '[' -z 71434 ']' 00:29:01.743 23:11:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.743 23:11:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:01.743 23:11:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:01.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
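For the second pass, bdevperf is started with -z and -r /var/tmp/bdevperf.sock, i.e. it sits idle on an RPC socket instead of running a job immediately; the script waits for that socket, configures the NVMe-oF paths over rpc.py, and only later kicks off the timed run with bdevperf.py perform_tests. A condensed sketch of that pattern under the same workspace paths (the polling loop is a simplified stand-in for the waitforlisten helper, and rpc_get_methods is used here only as a cheap liveness probe):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# simplified stand-in for waitforlisten: poll until the RPC socket answers
until $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# ... attach the NVMe0 paths over rpc.py (see the trace that follows), then start the workload
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# results land on bdevperf's stdout, which this run captures into host/try.txt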
00:29:01.743 23:11:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:01.743 23:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:02.003 23:11:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:02.003 23:11:30 -- common/autotest_common.sh@852 -- # return 0 00:29:02.003 23:11:30 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:02.264 [2024-06-09 23:11:30.284280] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:02.264 23:11:30 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:02.525 [2024-06-09 23:11:30.448705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:02.525 23:11:30 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:02.787 NVMe0n1 00:29:02.787 23:11:30 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.359 00:29:03.359 23:11:31 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.359 00:29:03.359 23:11:31 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:03.359 23:11:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:03.619 23:11:31 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:03.881 23:11:31 -- host/failover.sh@87 -- # sleep 3 00:29:07.181 23:11:34 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:07.181 23:11:34 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:07.181 23:11:35 -- host/failover.sh@90 -- # run_test_pid=72652 00:29:07.181 23:11:35 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:07.181 23:11:35 -- host/failover.sh@92 -- # wait 72652 00:29:08.122 0 00:29:08.122 23:11:36 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:08.122 [2024-06-09 23:11:29.377369] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
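The trace above is the core of the failover setup: the target subsystem gets two additional listeners (4421 and 4422), the bdevperf-side bdev_nvme attaches all three ports under the single controller name NVMe0, and the currently active path (4420) is then detached so the bdev is forced to fail over. A condensed sketch of that sequence using the same rpc.py calls as the trace (the rpc/nqn shorthands are only for readability here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# target side: listen on the two extra ports
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

# initiator (bdevperf) side: attach all three paths under one controller name
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
done

# confirm the controller exists, then drop the active path to trigger failover
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
sleep 3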
00:29:08.122 [2024-06-09 23:11:29.377442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71434 ] 00:29:08.122 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.122 [2024-06-09 23:11:29.435949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.122 [2024-06-09 23:11:29.497912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.122 [2024-06-09 23:11:31.831166] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:08.122 [2024-06-09 23:11:31.831210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.122 [2024-06-09 23:11:31.831221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.122 [2024-06-09 23:11:31.831230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.122 [2024-06-09 23:11:31.831238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.122 [2024-06-09 23:11:31.831245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.122 [2024-06-09 23:11:31.831252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.122 [2024-06-09 23:11:31.831260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:08.122 [2024-06-09 23:11:31.831267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.122 [2024-06-09 23:11:31.831274] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:08.122 [2024-06-09 23:11:31.831297] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:08.122 [2024-06-09 23:11:31.831311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78130 (9): Bad file descriptor 00:29:08.122 [2024-06-09 23:11:32.004655] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:08.122 Running I/O for 1 seconds... 
00:29:08.122 00:29:08.122 Latency(us) 00:29:08.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.122 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:08.122 Verification LBA range: start 0x0 length 0x4000 00:29:08.123 NVMe0n1 : 1.00 19722.91 77.04 0.00 0.00 6458.71 1631.57 16384.00 00:29:08.123 =================================================================================================================== 00:29:08.123 Total : 19722.91 77.04 0.00 0.00 6458.71 1631.57 16384.00 00:29:08.123 23:11:36 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.123 23:11:36 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:08.383 23:11:36 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.383 23:11:36 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:08.383 23:11:36 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:08.644 23:11:36 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:08.644 23:11:36 -- host/failover.sh@101 -- # sleep 3 00:29:11.950 23:11:39 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:11.950 23:11:39 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:11.950 23:11:39 -- host/failover.sh@108 -- # killprocess 71434 00:29:11.950 23:11:39 -- common/autotest_common.sh@926 -- # '[' -z 71434 ']' 00:29:11.950 23:11:39 -- common/autotest_common.sh@930 -- # kill -0 71434 00:29:11.950 23:11:39 -- common/autotest_common.sh@931 -- # uname 00:29:11.950 23:11:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:11.950 23:11:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71434 00:29:11.950 23:11:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:11.950 23:11:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:11.950 23:11:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71434' 00:29:11.950 killing process with pid 71434 00:29:11.950 23:11:40 -- common/autotest_common.sh@945 -- # kill 71434 00:29:11.950 23:11:40 -- common/autotest_common.sh@950 -- # wait 71434 00:29:12.211 23:11:40 -- host/failover.sh@110 -- # sync 00:29:12.211 23:11:40 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.211 23:11:40 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:12.211 23:11:40 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:12.211 23:11:40 -- host/failover.sh@116 -- # nvmftestfini 00:29:12.211 23:11:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:12.211 23:11:40 -- nvmf/common.sh@116 -- # sync 00:29:12.211 23:11:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:12.211 23:11:40 -- nvmf/common.sh@119 -- # set +e 00:29:12.211 23:11:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:12.211 23:11:40 -- nvmf/common.sh@121 -- # modprobe -v 
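After the one-second verify run, the secondary paths are peeled off one at a time (4422, then 4421), each detach preceded by a bdev_nvme_get_controllers | grep -q NVMe0 check so the final check can confirm NVMe0 survived losing every alternate path. The same pattern in loop form, as a sketch (rpc shorthand as before; the actual trace sleeps only after the final detach, sleeping on each pass is a simplification):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for port in 4422 4421; do
    # controller must still be registered before the next path is removed
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
done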
-r nvme-tcp 00:29:12.211 rmmod nvme_tcp 00:29:12.211 rmmod nvme_fabrics 00:29:12.211 rmmod nvme_keyring 00:29:12.211 23:11:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:12.211 23:11:40 -- nvmf/common.sh@123 -- # set -e 00:29:12.211 23:11:40 -- nvmf/common.sh@124 -- # return 0 00:29:12.211 23:11:40 -- nvmf/common.sh@477 -- # '[' -n 67745 ']' 00:29:12.211 23:11:40 -- nvmf/common.sh@478 -- # killprocess 67745 00:29:12.211 23:11:40 -- common/autotest_common.sh@926 -- # '[' -z 67745 ']' 00:29:12.211 23:11:40 -- common/autotest_common.sh@930 -- # kill -0 67745 00:29:12.211 23:11:40 -- common/autotest_common.sh@931 -- # uname 00:29:12.211 23:11:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:12.211 23:11:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67745 00:29:12.473 23:11:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:12.473 23:11:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:12.473 23:11:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67745' 00:29:12.473 killing process with pid 67745 00:29:12.473 23:11:40 -- common/autotest_common.sh@945 -- # kill 67745 00:29:12.473 23:11:40 -- common/autotest_common.sh@950 -- # wait 67745 00:29:12.473 23:11:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:12.473 23:11:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:12.473 23:11:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:12.473 23:11:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.473 23:11:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:12.473 23:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.473 23:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:12.473 23:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.023 23:11:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:15.023 00:29:15.023 real 0m39.309s 00:29:15.023 user 2m2.152s 00:29:15.023 sys 0m7.837s 00:29:15.023 23:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.023 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:15.023 ************************************ 00:29:15.023 END TEST nvmf_failover 00:29:15.023 ************************************ 00:29:15.023 23:11:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:15.023 23:11:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:15.023 23:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:15.023 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:15.023 ************************************ 00:29:15.023 START TEST nvmf_discovery 00:29:15.023 ************************************ 00:29:15.023 23:11:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:15.023 * Looking for test storage... 
00:29:15.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.023 23:11:42 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.023 23:11:42 -- nvmf/common.sh@7 -- # uname -s 00:29:15.023 23:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.023 23:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.023 23:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.023 23:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.023 23:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.023 23:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.023 23:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.023 23:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.023 23:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.023 23:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.023 23:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.023 23:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.023 23:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.023 23:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.023 23:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.023 23:11:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.023 23:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.023 23:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.023 23:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.023 23:11:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.023 23:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.023 23:11:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.023 23:11:42 -- paths/export.sh@5 -- # export PATH 00:29:15.023 23:11:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.023 23:11:42 -- nvmf/common.sh@46 -- # : 0 00:29:15.023 23:11:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:15.023 23:11:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:15.023 23:11:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:15.023 23:11:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.023 23:11:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.023 23:11:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:15.023 23:11:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:15.023 23:11:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:15.023 23:11:42 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:15.023 23:11:42 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:15.023 23:11:42 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:15.023 23:11:42 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:15.023 23:11:42 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:15.023 23:11:42 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:15.023 23:11:42 -- host/discovery.sh@25 -- # nvmftestinit 00:29:15.023 23:11:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:15.023 23:11:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.023 23:11:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:15.023 23:11:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:15.023 23:11:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:15.023 23:11:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.024 23:11:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.024 23:11:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.024 23:11:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:15.024 23:11:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:15.024 23:11:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:15.024 23:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:21.627 23:11:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:21.627 23:11:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:21.627 23:11:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:21.627 23:11:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:21.627 23:11:49 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:21.627 23:11:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:21.627 23:11:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:21.627 23:11:49 -- nvmf/common.sh@294 -- # net_devs=() 00:29:21.627 23:11:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:21.627 23:11:49 -- nvmf/common.sh@295 -- # e810=() 00:29:21.627 23:11:49 -- nvmf/common.sh@295 -- # local -ga e810 00:29:21.627 23:11:49 -- nvmf/common.sh@296 -- # x722=() 00:29:21.627 23:11:49 -- nvmf/common.sh@296 -- # local -ga x722 00:29:21.627 23:11:49 -- nvmf/common.sh@297 -- # mlx=() 00:29:21.627 23:11:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:21.627 23:11:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.627 23:11:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:21.627 23:11:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:21.627 23:11:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:21.627 23:11:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:21.627 23:11:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:21.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:21.627 23:11:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:21.627 23:11:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:21.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:21.627 23:11:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:21.627 23:11:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:21.628 23:11:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:21.628 
23:11:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.628 23:11:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:21.628 23:11:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.628 23:11:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:21.628 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:21.628 23:11:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.628 23:11:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:21.628 23:11:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.628 23:11:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:21.628 23:11:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.628 23:11:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:21.628 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:21.628 23:11:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.628 23:11:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:21.628 23:11:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:21.628 23:11:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:21.628 23:11:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:21.628 23:11:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.628 23:11:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.628 23:11:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.628 23:11:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:21.628 23:11:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.628 23:11:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.628 23:11:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:21.628 23:11:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.628 23:11:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.628 23:11:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:21.628 23:11:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:21.628 23:11:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.628 23:11:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.628 23:11:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.628 23:11:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.628 23:11:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:21.628 23:11:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.890 23:11:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.890 23:11:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.890 23:11:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:21.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:29:21.890 00:29:21.890 --- 10.0.0.2 ping statistics --- 00:29:21.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.890 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:29:21.890 23:11:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:21.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:29:21.890 00:29:21.890 --- 10.0.0.1 ping statistics --- 00:29:21.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.890 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:29:21.890 23:11:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.890 23:11:49 -- nvmf/common.sh@410 -- # return 0 00:29:21.890 23:11:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:21.890 23:11:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.890 23:11:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:21.890 23:11:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:21.890 23:11:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.890 23:11:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:21.890 23:11:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:21.890 23:11:49 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:21.890 23:11:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:21.890 23:11:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:21.890 23:11:49 -- common/autotest_common.sh@10 -- # set +x 00:29:21.890 23:11:49 -- nvmf/common.sh@469 -- # nvmfpid=77833 00:29:21.890 23:11:49 -- nvmf/common.sh@470 -- # waitforlisten 77833 00:29:21.890 23:11:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:21.890 23:11:49 -- common/autotest_common.sh@819 -- # '[' -z 77833 ']' 00:29:21.890 23:11:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.890 23:11:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:21.890 23:11:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.890 23:11:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:21.890 23:11:49 -- common/autotest_common.sh@10 -- # set +x 00:29:21.890 [2024-06-09 23:11:50.042608] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:21.890 [2024-06-09 23:11:50.042694] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.152 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.152 [2024-06-09 23:11:50.113882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.152 [2024-06-09 23:11:50.184973] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:22.152 [2024-06-09 23:11:50.185093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.152 [2024-06-09 23:11:50.185101] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.152 [2024-06-09 23:11:50.185108] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:22.152 [2024-06-09 23:11:50.185136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.726 23:11:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:22.726 23:11:50 -- common/autotest_common.sh@852 -- # return 0 00:29:22.726 23:11:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:22.726 23:11:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 23:11:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.726 23:11:50 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:22.726 23:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 [2024-06-09 23:11:50.843701] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.726 23:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.726 23:11:50 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:22.726 23:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 [2024-06-09 23:11:50.855845] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:22.726 23:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.726 23:11:50 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:22.726 23:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 null0 00:29:22.726 23:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.726 23:11:50 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:22.726 23:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 null1 00:29:22.726 23:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.726 23:11:50 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:22.726 23:11:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.726 23:11:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.726 23:11:50 -- host/discovery.sh@45 -- # hostpid=78067 00:29:22.726 23:11:50 -- host/discovery.sh@46 -- # waitforlisten 78067 /tmp/host.sock 00:29:22.726 23:11:50 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:22.726 23:11:50 -- common/autotest_common.sh@819 -- # '[' -z 78067 ']' 00:29:22.726 23:11:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:22.726 23:11:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:22.726 23:11:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:22.726 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:22.726 23:11:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:22.726 23:11:50 -- common/autotest_common.sh@10 -- # set +x 00:29:22.988 [2024-06-09 23:11:50.939031] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:22.988 [2024-06-09 23:11:50.939077] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78067 ] 00:29:22.988 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.988 [2024-06-09 23:11:50.996814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.988 [2024-06-09 23:11:51.059283] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:22.988 [2024-06-09 23:11:51.059413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.561 23:11:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:23.561 23:11:51 -- common/autotest_common.sh@852 -- # return 0 00:29:23.561 23:11:51 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.561 23:11:51 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:23.561 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.561 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.561 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.561 23:11:51 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:23.561 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.561 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.561 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.561 23:11:51 -- host/discovery.sh@72 -- # notify_id=0 00:29:23.561 23:11:51 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:23.561 23:11:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:23.561 23:11:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:23.561 23:11:51 -- host/discovery.sh@59 -- # sort 00:29:23.561 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.561 23:11:51 -- host/discovery.sh@59 -- # xargs 00:29:23.561 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.561 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:23.823 23:11:51 -- host/discovery.sh@79 -- # get_bdev_list 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # sort 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # xargs 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:23.823 23:11:51 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # sort 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # xargs 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:23.823 23:11:51 -- host/discovery.sh@83 -- # get_bdev_list 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # sort 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- host/discovery.sh@55 -- # xargs 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:23.823 23:11:51 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:23.823 23:11:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.823 23:11:51 -- common/autotest_common.sh@10 -- # set +x 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # sort 00:29:23.823 23:11:51 -- host/discovery.sh@59 -- # xargs 00:29:23.823 23:11:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.823 23:11:51 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:24.085 23:11:52 -- host/discovery.sh@87 -- # get_bdev_list 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # sort 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # xargs 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:24.085 23:11:52 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 [2024-06-09 23:11:52.055013] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:24.085 23:11:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:24.085 23:11:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- host/discovery.sh@59 -- # sort 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 23:11:52 
-- host/discovery.sh@59 -- # xargs 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:24.085 23:11:52 -- host/discovery.sh@93 -- # get_bdev_list 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # sort 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- host/discovery.sh@55 -- # xargs 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:24.085 23:11:52 -- host/discovery.sh@94 -- # get_notification_count 00:29:24.085 23:11:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:24.085 23:11:52 -- host/discovery.sh@74 -- # jq '. | length' 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@74 -- # notification_count=0 00:29:24.085 23:11:52 -- host/discovery.sh@75 -- # notify_id=0 00:29:24.085 23:11:52 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:24.085 23:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:24.085 23:11:52 -- common/autotest_common.sh@10 -- # set +x 00:29:24.085 23:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.085 23:11:52 -- host/discovery.sh@100 -- # sleep 1 00:29:24.656 [2024-06-09 23:11:52.764779] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:24.656 [2024-06-09 23:11:52.764802] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:24.656 [2024-06-09 23:11:52.764817] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:24.916 [2024-06-09 23:11:52.894235] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:24.916 [2024-06-09 23:11:52.955944] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:24.916 [2024-06-09 23:11:52.955967] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:25.176 23:11:53 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:25.176 23:11:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:25.176 23:11:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:25.176 23:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.176 23:11:53 -- host/discovery.sh@59 -- # sort 00:29:25.176 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 23:11:53 -- host/discovery.sh@59 -- # xargs 00:29:25.176 23:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.176 23:11:53 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:25.176 23:11:53 -- host/discovery.sh@102 -- # get_bdev_list 00:29:25.176 23:11:53 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.176 23:11:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:25.176 23:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.176 23:11:53 -- host/discovery.sh@55 -- # sort 00:29:25.176 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 23:11:53 -- host/discovery.sh@55 -- # xargs 00:29:25.176 23:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.176 23:11:53 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:25.176 23:11:53 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:25.176 23:11:53 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:25.176 23:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.176 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:25.176 23:11:53 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:25.176 23:11:53 -- host/discovery.sh@63 -- # sort -n 00:29:25.176 23:11:53 -- host/discovery.sh@63 -- # xargs 00:29:25.176 23:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.436 23:11:53 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:25.436 23:11:53 -- host/discovery.sh@104 -- # get_notification_count 00:29:25.436 23:11:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:25.436 23:11:53 -- host/discovery.sh@74 -- # jq '. | length' 00:29:25.436 23:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.436 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:25.436 23:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.436 23:11:53 -- host/discovery.sh@74 -- # notification_count=1 00:29:25.436 23:11:53 -- host/discovery.sh@75 -- # notify_id=1 00:29:25.436 23:11:53 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:25.436 23:11:53 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:25.436 23:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.436 23:11:53 -- common/autotest_common.sh@10 -- # set +x 00:29:25.436 23:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.436 23:11:53 -- host/discovery.sh@109 -- # sleep 1 00:29:26.383 23:11:54 -- host/discovery.sh@110 -- # get_bdev_list 00:29:26.383 23:11:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.383 23:11:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:26.383 23:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.383 23:11:54 -- host/discovery.sh@55 -- # sort 00:29:26.383 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:26.383 23:11:54 -- host/discovery.sh@55 -- # xargs 00:29:26.383 23:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.383 23:11:54 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:26.383 23:11:54 -- host/discovery.sh@111 -- # get_notification_count 00:29:26.383 23:11:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:26.383 23:11:54 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:26.383 23:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.383 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:26.383 23:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.383 23:11:54 -- host/discovery.sh@74 -- # notification_count=1 00:29:26.383 23:11:54 -- host/discovery.sh@75 -- # notify_id=2 00:29:26.383 23:11:54 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:26.383 23:11:54 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:26.383 23:11:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:26.383 23:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:26.383 [2024-06-09 23:11:54.553947] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:26.383 [2024-06-09 23:11:54.554943] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:26.383 [2024-06-09 23:11:54.554969] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:26.383 23:11:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:26.383 23:11:54 -- host/discovery.sh@117 -- # sleep 1 00:29:26.644 [2024-06-09 23:11:54.684378] bdev_nvme.c:6677:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:26.904 [2024-06-09 23:11:54.956726] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:26.904 [2024-06-09 23:11:54.956743] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:26.904 [2024-06-09 23:11:54.956748] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:27.476 23:11:55 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:27.476 23:11:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:27.476 23:11:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:27.476 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.476 23:11:55 -- host/discovery.sh@59 -- # sort 00:29:27.476 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.476 23:11:55 -- host/discovery.sh@59 -- # xargs 00:29:27.476 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.476 23:11:55 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:27.476 23:11:55 -- host/discovery.sh@119 -- # get_bdev_list 00:29:27.476 23:11:55 -- host/discovery.sh@55 -- # xargs 00:29:27.476 23:11:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:27.476 23:11:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:27.476 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.476 23:11:55 -- host/discovery.sh@55 -- # sort 00:29:27.476 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.737 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:27.737 23:11:55 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:27.737 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.737 23:11:55 -- common/autotest_common.sh@10 -- 
# set +x 00:29:27.737 23:11:55 -- host/discovery.sh@63 -- # xargs 00:29:27.737 23:11:55 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:27.737 23:11:55 -- host/discovery.sh@63 -- # sort -n 00:29:27.737 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@121 -- # get_notification_count 00:29:27.737 23:11:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:27.737 23:11:55 -- host/discovery.sh@74 -- # jq '. | length' 00:29:27.737 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.737 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.737 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@74 -- # notification_count=0 00:29:27.737 23:11:55 -- host/discovery.sh@75 -- # notify_id=2 00:29:27.737 23:11:55 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:27.737 23:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:27.737 23:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:27.737 [2024-06-09 23:11:55.777993] bdev_nvme.c:6735:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:27.737 [2024-06-09 23:11:55.778014] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:27.737 [2024-06-09 23:11:55.780959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.737 [2024-06-09 23:11:55.780978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.737 [2024-06-09 23:11:55.780986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.737 [2024-06-09 23:11:55.780994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.737 [2024-06-09 23:11:55.781002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.737 [2024-06-09 23:11:55.781010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.737 [2024-06-09 23:11:55.781018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:27.737 [2024-06-09 23:11:55.781025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:27.737 [2024-06-09 23:11:55.781032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.737 23:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:27.737 23:11:55 -- host/discovery.sh@127 -- # sleep 1 00:29:27.737 [2024-06-09 23:11:55.790972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.737 [2024-06-09 23:11:55.801011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.737 [2024-06-09 23:11:55.801675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.737 [2024-06-09 23:11:55.802230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.737 [2024-06-09 23:11:55.802244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.737 [2024-06-09 23:11:55.802254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.737 [2024-06-09 23:11:55.802272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.737 [2024-06-09 23:11:55.802307] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.737 [2024-06-09 23:11:55.802316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.737 [2024-06-09 23:11:55.802324] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.737 [2024-06-09 23:11:55.802339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.737 [2024-06-09 23:11:55.811064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.737 [2024-06-09 23:11:55.811674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.737 [2024-06-09 23:11:55.812208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.737 [2024-06-09 23:11:55.812222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.737 [2024-06-09 23:11:55.812231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.737 [2024-06-09 23:11:55.812249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.737 [2024-06-09 23:11:55.812274] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.737 [2024-06-09 23:11:55.812282] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.737 [2024-06-09 23:11:55.812289] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.737 [2024-06-09 23:11:55.812304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
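The repeating resetting-controller block here is expected rather than a test failure: the 4420 listener was just removed from nqn.2016-06.io.spdk:cnode0 at host/discovery.sh@126, so bdev_nvme keeps trying to reconnect the now-stale path and each connect() fails with errno 111 (ECONNREFUSED) until the next discovery log page reports the 4420 path as gone and only the 4421 path is kept. The trigger is the single RPC shown above; issued directly against the target's default RPC socket it would look roughly like:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420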
00:29:27.738 [2024-06-09 23:11:55.821117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.738 [2024-06-09 23:11:55.821722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.822260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.822274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.738 [2024-06-09 23:11:55.822283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.738 [2024-06-09 23:11:55.822301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.738 [2024-06-09 23:11:55.822359] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.738 [2024-06-09 23:11:55.822370] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.738 [2024-06-09 23:11:55.822378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.738 [2024-06-09 23:11:55.822413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.738 [2024-06-09 23:11:55.831175] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.738 [2024-06-09 23:11:55.831633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.832012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.832026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.738 [2024-06-09 23:11:55.832035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.738 [2024-06-09 23:11:55.832054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.738 [2024-06-09 23:11:55.832080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.738 [2024-06-09 23:11:55.832088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.738 [2024-06-09 23:11:55.832096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.738 [2024-06-09 23:11:55.832111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.738 [2024-06-09 23:11:55.841236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.738 [2024-06-09 23:11:55.841622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.842181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.842195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.738 [2024-06-09 23:11:55.842205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.738 [2024-06-09 23:11:55.842222] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.738 [2024-06-09 23:11:55.842250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.738 [2024-06-09 23:11:55.842258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.738 [2024-06-09 23:11:55.842266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.738 [2024-06-09 23:11:55.842281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.738 [2024-06-09 23:11:55.851291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.738 [2024-06-09 23:11:55.851671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.851983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.851993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.738 [2024-06-09 23:11:55.852001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.738 [2024-06-09 23:11:55.852013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.738 [2024-06-09 23:11:55.852023] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.738 [2024-06-09 23:11:55.852030] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.738 [2024-06-09 23:11:55.852037] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.738 [2024-06-09 23:11:55.852047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
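Once this reconnect churn settles (the 4420 path is dropped a few lines below and 4421 is found again), the test re-checks the active paths with the get_subsystem_paths helper, which is just bdev_nvme_get_controllers filtered through jq as seen throughout this trace. A standalone equivalent, assuming the same host-side RPC socket, would be:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

which is expected to print just 4421 at this point, whereas it printed 4420 4421 before the listener was removed.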
00:29:27.738 [2024-06-09 23:11:55.861345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:27.738 [2024-06-09 23:11:55.861876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.862183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.738 [2024-06-09 23:11:55.862194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3af10 with addr=10.0.0.2, port=4420 00:29:27.738 [2024-06-09 23:11:55.862201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3af10 is same with the state(5) to be set 00:29:27.738 [2024-06-09 23:11:55.862212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3af10 (9): Bad file descriptor 00:29:27.738 [2024-06-09 23:11:55.862222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:27.738 [2024-06-09 23:11:55.862228] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:27.738 [2024-06-09 23:11:55.862235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:27.738 [2024-06-09 23:11:55.862245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.738 [2024-06-09 23:11:55.868326] bdev_nvme.c:6540:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:27.738 [2024-06-09 23:11:55.868344] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:28.679 23:11:56 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:28.679 23:11:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:28.679 23:11:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:28.679 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.679 23:11:56 -- host/discovery.sh@59 -- # sort 00:29:28.679 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:28.679 23:11:56 -- host/discovery.sh@59 -- # xargs 00:29:28.679 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.679 23:11:56 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:28.679 23:11:56 -- host/discovery.sh@129 -- # get_bdev_list 00:29:28.679 23:11:56 -- host/discovery.sh@55 -- # xargs 00:29:28.679 23:11:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:28.679 23:11:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:28.679 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.679 23:11:56 -- host/discovery.sh@55 -- # sort 00:29:28.679 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:28.940 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.940 23:11:56 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:28.940 23:11:56 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:28.940 23:11:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:28.941 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.941 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:28.941 23:11:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:28.941 23:11:56 -- 
host/discovery.sh@63 -- # sort -n 00:29:28.941 23:11:56 -- host/discovery.sh@63 -- # xargs 00:29:28.941 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.941 23:11:56 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:28.941 23:11:56 -- host/discovery.sh@131 -- # get_notification_count 00:29:28.941 23:11:56 -- host/discovery.sh@74 -- # jq '. | length' 00:29:28.941 23:11:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:28.941 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.941 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:28.941 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.941 23:11:56 -- host/discovery.sh@74 -- # notification_count=0 00:29:28.941 23:11:56 -- host/discovery.sh@75 -- # notify_id=2 00:29:28.941 23:11:56 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:28.941 23:11:56 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:28.941 23:11:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.941 23:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:28.941 23:11:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.941 23:11:56 -- host/discovery.sh@135 -- # sleep 1 00:29:29.883 23:11:57 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:29.883 23:11:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:29.883 23:11:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:29.883 23:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.883 23:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:29.883 23:11:58 -- host/discovery.sh@59 -- # sort 00:29:29.883 23:11:58 -- host/discovery.sh@59 -- # xargs 00:29:29.883 23:11:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.883 23:11:58 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:29.883 23:11:58 -- host/discovery.sh@137 -- # get_bdev_list 00:29:29.883 23:11:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:29.883 23:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.883 23:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:29.883 23:11:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:29.883 23:11:58 -- host/discovery.sh@55 -- # sort 00:29:29.883 23:11:58 -- host/discovery.sh@55 -- # xargs 00:29:30.143 23:11:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:30.143 23:11:58 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:30.144 23:11:58 -- host/discovery.sh@138 -- # get_notification_count 00:29:30.144 23:11:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:30.144 23:11:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:30.144 23:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:30.144 23:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:30.144 23:11:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:30.144 23:11:58 -- host/discovery.sh@74 -- # notification_count=2 00:29:30.144 23:11:58 -- host/discovery.sh@75 -- # notify_id=4 00:29:30.144 23:11:58 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:30.144 23:11:58 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:30.144 23:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:30.144 23:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:31.086 [2024-06-09 23:11:59.210875] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:31.086 [2024-06-09 23:11:59.210893] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:31.086 [2024-06-09 23:11:59.210907] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:31.347 [2024-06-09 23:11:59.300174] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:31.607 [2024-06-09 23:11:59.611085] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:31.607 [2024-06-09 23:11:59.611115] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:31.607 23:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.607 23:11:59 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.607 23:11:59 -- common/autotest_common.sh@640 -- # local es=0 00:29:31.607 23:11:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.607 23:11:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:31.607 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.607 23:11:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:31.607 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.607 23:11:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.607 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.607 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.607 request: 00:29:31.607 { 00:29:31.607 "name": "nvme", 00:29:31.607 "trtype": "tcp", 00:29:31.607 "traddr": "10.0.0.2", 00:29:31.607 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:31.607 "adrfam": "ipv4", 00:29:31.607 "trsvcid": "8009", 00:29:31.607 "wait_for_attach": true, 00:29:31.607 "method": "bdev_nvme_start_discovery", 00:29:31.607 "req_id": 1 00:29:31.607 } 00:29:31.607 Got JSON-RPC error response 00:29:31.607 response: 00:29:31.607 { 00:29:31.607 "code": -17, 00:29:31.607 "message": "File exists" 00:29:31.607 } 00:29:31.607 23:11:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:31.607 23:11:59 -- common/autotest_common.sh@643 -- # es=1 00:29:31.607 23:11:59 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:31.607 23:11:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:31.607 23:11:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:31.607 23:11:59 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:31.607 23:11:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:31.607 23:11:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:31.608 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # sort 00:29:31.608 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # xargs 00:29:31.608 23:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.608 23:11:59 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:31.608 23:11:59 -- host/discovery.sh@147 -- # get_bdev_list 00:29:31.608 23:11:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.608 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.608 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.608 23:11:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.608 23:11:59 -- host/discovery.sh@55 -- # sort 00:29:31.608 23:11:59 -- host/discovery.sh@55 -- # xargs 00:29:31.608 23:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.608 23:11:59 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:31.608 23:11:59 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.608 23:11:59 -- common/autotest_common.sh@640 -- # local es=0 00:29:31.608 23:11:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.608 23:11:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:31.608 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.608 23:11:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:31.608 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.608 23:11:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:31.608 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.608 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.608 request: 00:29:31.608 { 00:29:31.608 "name": "nvme_second", 00:29:31.608 "trtype": "tcp", 00:29:31.608 "traddr": "10.0.0.2", 00:29:31.608 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:31.608 "adrfam": "ipv4", 00:29:31.608 "trsvcid": "8009", 00:29:31.608 "wait_for_attach": true, 00:29:31.608 "method": "bdev_nvme_start_discovery", 00:29:31.608 "req_id": 1 00:29:31.608 } 00:29:31.608 Got JSON-RPC error response 00:29:31.608 response: 00:29:31.608 { 00:29:31.608 "code": -17, 00:29:31.608 "message": "File exists" 00:29:31.608 } 00:29:31.608 23:11:59 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:31.608 23:11:59 -- common/autotest_common.sh@643 -- # es=1 00:29:31.608 23:11:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:31.608 23:11:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:31.608 23:11:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:31.608 
23:11:59 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:31.608 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # sort 00:29:31.608 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.608 23:11:59 -- host/discovery.sh@67 -- # xargs 00:29:31.608 23:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.868 23:11:59 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:31.868 23:11:59 -- host/discovery.sh@153 -- # get_bdev_list 00:29:31.868 23:11:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:31.868 23:11:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:31.868 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.868 23:11:59 -- host/discovery.sh@55 -- # sort 00:29:31.868 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:31.868 23:11:59 -- host/discovery.sh@55 -- # xargs 00:29:31.868 23:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.868 23:11:59 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:31.868 23:11:59 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:31.868 23:11:59 -- common/autotest_common.sh@640 -- # local es=0 00:29:31.868 23:11:59 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:31.868 23:11:59 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:31.868 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.868 23:11:59 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:31.868 23:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:31.868 23:11:59 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:31.868 23:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.868 23:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:32.810 [2024-06-09 23:12:00.882796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.810 [2024-06-09 23:12:00.883315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.810 [2024-06-09 23:12:00.883329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb306c0 with addr=10.0.0.2, port=8010 00:29:32.810 [2024-06-09 23:12:00.883342] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:32.810 [2024-06-09 23:12:00.883349] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:32.810 [2024-06-09 23:12:00.883357] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:33.753 [2024-06-09 23:12:01.885139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.753 [2024-06-09 23:12:01.885540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.753 [2024-06-09 23:12:01.885552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0xb306c0 with addr=10.0.0.2, port=8010 00:29:33.753 [2024-06-09 23:12:01.885564] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:33.753 [2024-06-09 23:12:01.885572] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:33.753 [2024-06-09 23:12:01.885580] bdev_nvme.c:6815:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:35.137 [2024-06-09 23:12:02.886958] bdev_nvme.c:6796:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:35.137 request: 00:29:35.137 { 00:29:35.137 "name": "nvme_second", 00:29:35.137 "trtype": "tcp", 00:29:35.137 "traddr": "10.0.0.2", 00:29:35.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:35.137 "adrfam": "ipv4", 00:29:35.137 "trsvcid": "8010", 00:29:35.137 "attach_timeout_ms": 3000, 00:29:35.137 "method": "bdev_nvme_start_discovery", 00:29:35.137 "req_id": 1 00:29:35.137 } 00:29:35.137 Got JSON-RPC error response 00:29:35.137 response: 00:29:35.137 { 00:29:35.137 "code": -110, 00:29:35.137 "message": "Connection timed out" 00:29:35.137 } 00:29:35.137 23:12:02 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:35.137 23:12:02 -- common/autotest_common.sh@643 -- # es=1 00:29:35.137 23:12:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:35.137 23:12:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:35.137 23:12:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:35.137 23:12:02 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:35.137 23:12:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:35.137 23:12:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:35.137 23:12:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.137 23:12:02 -- host/discovery.sh@67 -- # sort 00:29:35.137 23:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:35.137 23:12:02 -- host/discovery.sh@67 -- # xargs 00:29:35.137 23:12:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.137 23:12:02 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:35.137 23:12:02 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:35.137 23:12:02 -- host/discovery.sh@162 -- # kill 78067 00:29:35.137 23:12:02 -- host/discovery.sh@163 -- # nvmftestfini 00:29:35.137 23:12:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:35.137 23:12:02 -- nvmf/common.sh@116 -- # sync 00:29:35.137 23:12:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:35.137 23:12:02 -- nvmf/common.sh@119 -- # set +e 00:29:35.137 23:12:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:35.137 23:12:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:35.137 rmmod nvme_tcp 00:29:35.137 rmmod nvme_fabrics 00:29:35.137 rmmod nvme_keyring 00:29:35.137 23:12:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:35.137 23:12:03 -- nvmf/common.sh@123 -- # set -e 00:29:35.137 23:12:03 -- nvmf/common.sh@124 -- # return 0 00:29:35.137 23:12:03 -- nvmf/common.sh@477 -- # '[' -n 77833 ']' 00:29:35.137 23:12:03 -- nvmf/common.sh@478 -- # killprocess 77833 00:29:35.137 23:12:03 -- common/autotest_common.sh@926 -- # '[' -z 77833 ']' 00:29:35.137 23:12:03 -- common/autotest_common.sh@930 -- # kill -0 77833 00:29:35.137 23:12:03 -- common/autotest_common.sh@931 -- # uname 00:29:35.137 23:12:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:35.137 23:12:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77833 00:29:35.137 23:12:03 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:35.137 23:12:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:35.137 23:12:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77833' 00:29:35.137 killing process with pid 77833 00:29:35.137 23:12:03 -- common/autotest_common.sh@945 -- # kill 77833 00:29:35.137 23:12:03 -- common/autotest_common.sh@950 -- # wait 77833 00:29:35.138 23:12:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:35.138 23:12:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:35.138 23:12:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:35.138 23:12:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:35.138 23:12:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:35.138 23:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:35.138 23:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:35.138 23:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.685 23:12:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:37.685 00:29:37.685 real 0m22.579s 00:29:37.685 user 0m28.816s 00:29:37.685 sys 0m6.667s 00:29:37.685 23:12:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.685 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:37.685 ************************************ 00:29:37.685 END TEST nvmf_discovery 00:29:37.685 ************************************ 00:29:37.685 23:12:05 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:37.685 23:12:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:37.685 23:12:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:37.685 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:37.685 ************************************ 00:29:37.685 START TEST nvmf_discovery_remove_ifc 00:29:37.685 ************************************ 00:29:37.685 23:12:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:37.685 * Looking for test storage... 
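The -110 error above is the outcome of the second discovery attach in this test: bdev_nvme_start_discovery is pointed at 10.0.0.2 port 8010 with a 3000 ms attach timeout and gives up, and the surrounding assertions appear to treat the nonzero exit as the expected result (the suite goes on to report END TEST nvmf_discovery). A minimal sketch of reproducing that call by hand through scripts/rpc.py, using only the socket path and parameters visible in the JSON-RPC request above; the exact spelling of the attach-timeout option is an assumption, not something this log confirms:

  # Sketch: manual equivalent of the timed-out request shown above.
  # /tmp/host.sock, the address, port 8010, the name and the hostnqn are all taken from the log.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --wait-for-attach --attach-timeout-ms 3000   # timeout flag name assumed
  # Expect a JSON-RPC error with code -110 (connection timed out), as in the transcript.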
00:29:37.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.685 23:12:05 -- nvmf/common.sh@7 -- # uname -s 00:29:37.685 23:12:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.685 23:12:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.685 23:12:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.685 23:12:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.685 23:12:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.685 23:12:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.685 23:12:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.685 23:12:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.685 23:12:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.685 23:12:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.685 23:12:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:37.685 23:12:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:37.685 23:12:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.685 23:12:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.685 23:12:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.685 23:12:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.685 23:12:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.685 23:12:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.685 23:12:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.685 23:12:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.685 23:12:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.685 23:12:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.685 23:12:05 -- paths/export.sh@5 -- # export PATH 00:29:37.685 23:12:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.685 23:12:05 -- nvmf/common.sh@46 -- # : 0 00:29:37.685 23:12:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:37.685 23:12:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:37.685 23:12:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:37.685 23:12:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.685 23:12:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.685 23:12:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:37.685 23:12:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:37.685 23:12:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:37.685 23:12:05 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:37.685 23:12:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:37.685 23:12:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.685 23:12:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:37.685 23:12:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:37.685 23:12:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:37.685 23:12:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.685 23:12:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.685 23:12:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.685 23:12:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:37.685 23:12:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:37.685 23:12:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:37.685 23:12:05 -- common/autotest_common.sh@10 -- # set +x 00:29:44.278 23:12:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:44.278 23:12:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:44.278 23:12:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:44.278 23:12:12 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:44.278 23:12:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:44.278 23:12:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:44.278 23:12:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:44.278 23:12:12 -- nvmf/common.sh@294 -- # net_devs=() 00:29:44.278 23:12:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:44.278 23:12:12 -- nvmf/common.sh@295 -- # e810=() 00:29:44.278 23:12:12 -- nvmf/common.sh@295 -- # local -ga e810 00:29:44.278 23:12:12 -- nvmf/common.sh@296 -- # x722=() 00:29:44.278 23:12:12 -- nvmf/common.sh@296 -- # local -ga x722 00:29:44.278 23:12:12 -- nvmf/common.sh@297 -- # mlx=() 00:29:44.278 23:12:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:44.278 23:12:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:44.278 23:12:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:44.278 23:12:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:44.278 23:12:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:44.278 23:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:44.278 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:44.278 23:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:44.278 23:12:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:44.278 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:44.278 23:12:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:44.278 23:12:12 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:44.278 23:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.278 23:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.278 23:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:44.278 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:44.278 23:12:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.278 23:12:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:44.278 23:12:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:44.278 23:12:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:44.278 23:12:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:44.278 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:44.278 23:12:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:44.278 23:12:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:44.278 23:12:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:44.278 23:12:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:44.278 23:12:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:44.278 23:12:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:44.278 23:12:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:44.278 23:12:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:44.278 23:12:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:44.278 23:12:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:44.278 23:12:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:44.278 23:12:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:44.278 23:12:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:44.278 23:12:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:44.278 23:12:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:44.278 23:12:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:44.278 23:12:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:44.278 23:12:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:44.278 23:12:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:44.278 23:12:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:44.278 23:12:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:44.539 23:12:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:44.539 23:12:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:44.539 23:12:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:44.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:44.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:29:44.539 00:29:44.539 --- 10.0.0.2 ping statistics --- 00:29:44.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.539 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:29:44.539 23:12:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:44.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:44.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:29:44.539 00:29:44.539 --- 10.0.0.1 ping statistics --- 00:29:44.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:44.539 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:29:44.539 23:12:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:44.539 23:12:12 -- nvmf/common.sh@410 -- # return 0 00:29:44.539 23:12:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:44.539 23:12:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:44.539 23:12:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:44.539 23:12:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:44.539 23:12:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:44.539 23:12:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:44.539 23:12:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:44.539 23:12:12 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:44.539 23:12:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:44.539 23:12:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:44.539 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:44.539 23:12:12 -- nvmf/common.sh@469 -- # nvmfpid=85291 00:29:44.539 23:12:12 -- nvmf/common.sh@470 -- # waitforlisten 85291 00:29:44.539 23:12:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:44.539 23:12:12 -- common/autotest_common.sh@819 -- # '[' -z 85291 ']' 00:29:44.539 23:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.539 23:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:44.539 23:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.539 23:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:44.539 23:12:12 -- common/autotest_common.sh@10 -- # set +x 00:29:44.539 [2024-06-09 23:12:12.628427] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:44.539 [2024-06-09 23:12:12.628477] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.539 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.539 [2024-06-09 23:12:12.693174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.799 [2024-06-09 23:12:12.755085] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:44.799 [2024-06-09 23:12:12.755204] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:44.799 [2024-06-09 23:12:12.755213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:44.799 [2024-06-09 23:12:12.755220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.799 [2024-06-09 23:12:12.755239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.369 23:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:45.369 23:12:13 -- common/autotest_common.sh@852 -- # return 0 00:29:45.369 23:12:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:45.369 23:12:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:45.369 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:29:45.369 23:12:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.369 23:12:13 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:45.369 23:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.369 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:29:45.369 [2024-06-09 23:12:13.465454] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.369 [2024-06-09 23:12:13.473588] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:45.369 null0 00:29:45.369 [2024-06-09 23:12:13.505608] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:45.369 23:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.369 23:12:13 -- host/discovery_remove_ifc.sh@59 -- # hostpid=85429 00:29:45.369 23:12:13 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85429 /tmp/host.sock 00:29:45.369 23:12:13 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:45.369 23:12:13 -- common/autotest_common.sh@819 -- # '[' -z 85429 ']' 00:29:45.369 23:12:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:45.369 23:12:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:45.369 23:12:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:45.369 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:45.369 23:12:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:45.369 23:12:13 -- common/autotest_common.sh@10 -- # set +x 00:29:45.630 [2024-06-09 23:12:13.580527] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:45.630 [2024-06-09 23:12:13.580619] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85429 ] 00:29:45.630 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.630 [2024-06-09 23:12:13.641089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.630 [2024-06-09 23:12:13.703561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:45.630 [2024-06-09 23:12:13.703689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.202 23:12:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:46.202 23:12:14 -- common/autotest_common.sh@852 -- # return 0 00:29:46.202 23:12:14 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:46.202 23:12:14 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:46.202 23:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.202 23:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:46.202 23:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.202 23:12:14 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:46.202 23:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.202 23:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:46.463 23:12:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.463 23:12:14 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:46.463 23:12:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.463 23:12:14 -- common/autotest_common.sh@10 -- # set +x 00:29:47.407 [2024-06-09 23:12:15.435438] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:47.407 [2024-06-09 23:12:15.435459] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:47.407 [2024-06-09 23:12:15.435474] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:47.407 [2024-06-09 23:12:15.526763] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:47.668 [2024-06-09 23:12:15.751860] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:47.668 [2024-06-09 23:12:15.751904] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:47.668 [2024-06-09 23:12:15.751928] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:47.668 [2024-06-09 23:12:15.751942] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:47.668 [2024-06-09 23:12:15.751962] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:47.668 23:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.668 23:12:15 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:47.668 [2024-06-09 23:12:15.754322] 
bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8e54d0 was disconnected and freed. delete nvme_qpair. 00:29:47.668 23:12:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:47.668 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.668 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:47.668 23:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.668 23:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:47.668 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:47.669 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:47.669 23:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.669 23:12:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:47.669 23:12:15 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:47.669 23:12:15 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:47.929 23:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:47.929 23:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:47.929 23:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:47.929 23:12:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:48.911 23:12:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:48.911 23:12:17 -- common/autotest_common.sh@10 -- # set +x 00:29:48.911 23:12:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:48.911 23:12:17 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:50.294 23:12:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:50.294 23:12:18 -- common/autotest_common.sh@10 -- # set +x 00:29:50.294 23:12:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:50.294 23:12:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:51.234 23:12:19 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:51.234 23:12:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.234 23:12:19 -- common/autotest_common.sh@10 -- # set +x 00:29:51.234 23:12:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:51.234 23:12:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:52.176 23:12:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:52.176 23:12:20 -- common/autotest_common.sh@10 -- # set +x 00:29:52.176 23:12:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:52.176 23:12:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:53.117 [2024-06-09 23:12:21.192381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:53.117 [2024-06-09 23:12:21.192431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.117 [2024-06-09 23:12:21.192443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.117 [2024-06-09 23:12:21.192452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.117 [2024-06-09 23:12:21.192460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.117 [2024-06-09 23:12:21.192468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.117 [2024-06-09 23:12:21.192476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.117 [2024-06-09 23:12:21.192483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.117 [2024-06-09 23:12:21.192491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.117 [2024-06-09 23:12:21.192499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.118 [2024-06-09 23:12:21.192511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.118 [2024-06-09 23:12:21.192518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8abb40 is same with the state(5) to be set 00:29:53.118 [2024-06-09 
23:12:21.202405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abb40 (9): Bad file descriptor 00:29:53.118 [2024-06-09 23:12:21.212443] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:53.118 23:12:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:53.118 23:12:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:53.118 23:12:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:53.118 23:12:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:53.118 23:12:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:53.118 23:12:21 -- common/autotest_common.sh@10 -- # set +x 00:29:53.118 23:12:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.503 [2024-06-09 23:12:22.258426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:55.444 [2024-06-09 23:12:23.282442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:55.444 [2024-06-09 23:12:23.282482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8abb40 with addr=10.0.0.2, port=4420 00:29:55.444 [2024-06-09 23:12:23.282493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8abb40 is same with the state(5) to be set 00:29:55.444 [2024-06-09 23:12:23.282833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abb40 (9): Bad file descriptor 00:29:55.444 [2024-06-09 23:12:23.282855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.444 [2024-06-09 23:12:23.282876] bdev_nvme.c:6504:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:55.444 [2024-06-09 23:12:23.282898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.444 [2024-06-09 23:12:23.282909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.444 [2024-06-09 23:12:23.282920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.444 [2024-06-09 23:12:23.282928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.444 [2024-06-09 23:12:23.282936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.444 [2024-06-09 23:12:23.282944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.444 [2024-06-09 23:12:23.282952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.444 [2024-06-09 23:12:23.282959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.444 [2024-06-09 23:12:23.282968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:55.444 [2024-06-09 23:12:23.282975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:55.444 [2024-06-09 23:12:23.282983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:55.444 [2024-06-09 23:12:23.283515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abf50 (9): Bad file descriptor 00:29:55.444 [2024-06-09 23:12:23.284527] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:55.444 [2024-06-09 23:12:23.284539] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:55.444 23:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.444 23:12:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:55.444 23:12:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:56.385 23:12:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:56.385 23:12:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:56.385 23:12:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:56.385 23:12:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.385 23:12:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:56.385 23:12:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:56.385 23:12:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:57.326 [2024-06-09 23:12:25.341834] bdev_nvme.c:6753:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:57.326 [2024-06-09 23:12:25.341855] bdev_nvme.c:6833:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:57.326 [2024-06-09 23:12:25.341870] bdev_nvme.c:6716:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:57.326 [2024-06-09 23:12:25.471287] bdev_nvme.c:6682:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:57.326 23:12:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.326 23:12:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.326 23:12:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:57.326 23:12:25 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:29:57.326 23:12:25 -- common/autotest_common.sh@10 -- # set +x 00:29:57.326 23:12:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.326 23:12:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.586 23:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.586 23:12:25 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:57.586 23:12:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:57.586 [2024-06-09 23:12:25.532471] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:57.586 [2024-06-09 23:12:25.532511] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:57.586 [2024-06-09 23:12:25.532531] bdev_nvme.c:7542:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:57.586 [2024-06-09 23:12:25.532544] bdev_nvme.c:6572:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:57.586 [2024-06-09 23:12:25.532552] bdev_nvme.c:6531:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:57.586 [2024-06-09 23:12:25.539605] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8b9780 was disconnected and freed. delete nvme_qpair. 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:58.528 23:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:58.528 23:12:26 -- common/autotest_common.sh@10 -- # set +x 00:29:58.528 23:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:58.528 23:12:26 -- host/discovery_remove_ifc.sh@90 -- # killprocess 85429 00:29:58.528 23:12:26 -- common/autotest_common.sh@926 -- # '[' -z 85429 ']' 00:29:58.528 23:12:26 -- common/autotest_common.sh@930 -- # kill -0 85429 00:29:58.528 23:12:26 -- common/autotest_common.sh@931 -- # uname 00:29:58.528 23:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:58.528 23:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85429 00:29:58.528 23:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:58.528 23:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:58.528 23:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85429' 00:29:58.528 killing process with pid 85429 00:29:58.528 23:12:26 -- common/autotest_common.sh@945 -- # kill 85429 00:29:58.528 23:12:26 -- common/autotest_common.sh@950 -- # wait 85429 00:29:58.789 23:12:26 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:58.789 23:12:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:58.789 23:12:26 -- nvmf/common.sh@116 -- # sync 00:29:58.789 23:12:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:58.789 23:12:26 -- nvmf/common.sh@119 -- # set +e 00:29:58.789 23:12:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:58.789 23:12:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:58.789 rmmod nvme_tcp 00:29:58.789 rmmod nvme_fabrics 00:29:58.789 rmmod nvme_keyring 00:29:58.789 
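For reference, the interface-removal flow that discovery_remove_ifc just exercised reduces to pulling the target-side address out from under the attached controller and then restoring it. A condensed recap, with the namespace, interface, and address names copied verbatim from the trace above (the polling of bdev_get_bdevs for nvme0n1/nvme1n1 between the two halves is omitted):

  # Take the target address away -> the attached controller fails and its bdev (nvme0n1) disappears
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

  # Give it back -> the discovery service re-attaches and a new bdev (nvme1n1) appears
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up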
23:12:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:58.789 23:12:26 -- nvmf/common.sh@123 -- # set -e 00:29:58.789 23:12:26 -- nvmf/common.sh@124 -- # return 0 00:29:58.789 23:12:26 -- nvmf/common.sh@477 -- # '[' -n 85291 ']' 00:29:58.789 23:12:26 -- nvmf/common.sh@478 -- # killprocess 85291 00:29:58.789 23:12:26 -- common/autotest_common.sh@926 -- # '[' -z 85291 ']' 00:29:58.789 23:12:26 -- common/autotest_common.sh@930 -- # kill -0 85291 00:29:58.789 23:12:26 -- common/autotest_common.sh@931 -- # uname 00:29:58.789 23:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:58.789 23:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85291 00:29:58.789 23:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:58.789 23:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:58.789 23:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85291' 00:29:58.789 killing process with pid 85291 00:29:58.789 23:12:26 -- common/autotest_common.sh@945 -- # kill 85291 00:29:58.789 23:12:26 -- common/autotest_common.sh@950 -- # wait 85291 00:29:59.051 23:12:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:59.051 23:12:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:59.051 23:12:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:59.051 23:12:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:59.051 23:12:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:59.051 23:12:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.051 23:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:59.051 23:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.968 23:12:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:00.968 00:30:00.968 real 0m23.749s 00:30:00.968 user 0m28.075s 00:30:00.968 sys 0m6.410s 00:30:00.968 23:12:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:00.968 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:30:00.968 ************************************ 00:30:00.968 END TEST nvmf_discovery_remove_ifc 00:30:00.968 ************************************ 00:30:00.968 23:12:29 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:30:00.968 23:12:29 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:00.968 23:12:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:00.968 23:12:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:00.968 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:30:00.968 ************************************ 00:30:00.968 START TEST nvmf_digest 00:30:00.968 ************************************ 00:30:00.968 23:12:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:01.230 * Looking for test storage... 
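The digest test that starts here rebuilds the same namespace-based NVMe/TCP test bed through nvmftestinit before doing any digest work; the nvmf_tcp_init trace further down repeats it step by step. A condensed recap of that setup, with every command and name lifted from the trace (this is only a summary of the nvmf/common.sh behaviour seen in this log, not a substitute for it):

  # Target port lives in its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP through
  ping -c 1 10.0.0.2                                                   # reachability check, both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1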
00:30:01.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:01.230 23:12:29 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.230 23:12:29 -- nvmf/common.sh@7 -- # uname -s 00:30:01.230 23:12:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.230 23:12:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.230 23:12:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.230 23:12:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.230 23:12:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.230 23:12:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.230 23:12:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.230 23:12:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.230 23:12:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.230 23:12:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.230 23:12:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.230 23:12:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.230 23:12:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.230 23:12:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.230 23:12:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.230 23:12:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.230 23:12:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.230 23:12:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.230 23:12:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.230 23:12:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.230 23:12:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.230 23:12:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.230 23:12:29 -- paths/export.sh@5 -- # export PATH 00:30:01.230 23:12:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.230 23:12:29 -- nvmf/common.sh@46 -- # : 0 00:30:01.230 23:12:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:01.230 23:12:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:01.230 23:12:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:01.230 23:12:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.231 23:12:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.231 23:12:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:01.231 23:12:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:01.231 23:12:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:01.231 23:12:29 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:01.231 23:12:29 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:01.231 23:12:29 -- host/digest.sh@16 -- # runtime=2 00:30:01.231 23:12:29 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:01.231 23:12:29 -- host/digest.sh@132 -- # nvmftestinit 00:30:01.231 23:12:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:01.231 23:12:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.231 23:12:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:01.231 23:12:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:01.231 23:12:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:01.231 23:12:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.231 23:12:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:01.231 23:12:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.231 23:12:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:01.231 23:12:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:01.231 23:12:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:01.231 23:12:29 -- common/autotest_common.sh@10 -- # set +x 00:30:07.822 23:12:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:07.822 23:12:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:07.822 23:12:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:07.822 23:12:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:07.822 23:12:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:07.822 23:12:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:07.822 23:12:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:07.822 23:12:35 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:07.822 23:12:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:07.822 23:12:35 -- nvmf/common.sh@295 -- # e810=() 00:30:07.822 23:12:35 -- nvmf/common.sh@295 -- # local -ga e810 00:30:07.822 23:12:35 -- nvmf/common.sh@296 -- # x722=() 00:30:07.822 23:12:35 -- nvmf/common.sh@296 -- # local -ga x722 00:30:07.822 23:12:35 -- nvmf/common.sh@297 -- # mlx=() 00:30:07.822 23:12:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:07.822 23:12:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.822 23:12:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:07.822 23:12:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:07.822 23:12:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:07.822 23:12:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.822 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.822 23:12:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:07.822 23:12:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.822 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.822 23:12:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:07.822 23:12:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.822 23:12:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.822 23:12:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.822 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:07.822 23:12:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.822 23:12:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:07.822 23:12:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.822 23:12:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.822 23:12:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.822 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:07.822 23:12:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.822 23:12:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:07.822 23:12:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:07.822 23:12:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:07.822 23:12:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.822 23:12:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.822 23:12:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.822 23:12:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:07.822 23:12:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.822 23:12:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.822 23:12:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:07.822 23:12:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.822 23:12:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.822 23:12:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:07.822 23:12:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:07.822 23:12:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.822 23:12:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.822 23:12:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.822 23:12:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.822 23:12:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:07.823 23:12:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.083 23:12:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.083 23:12:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.083 23:12:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:08.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:30:08.083 00:30:08.083 --- 10.0.0.2 ping statistics --- 00:30:08.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.083 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:30:08.083 23:12:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:30:08.083 00:30:08.083 --- 10.0.0.1 ping statistics --- 00:30:08.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.083 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:30:08.083 23:12:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.083 23:12:36 -- nvmf/common.sh@410 -- # return 0 00:30:08.083 23:12:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:08.083 23:12:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.083 23:12:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:08.083 23:12:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:08.083 23:12:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.083 23:12:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:08.083 23:12:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:08.083 23:12:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:08.083 23:12:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:08.083 23:12:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:08.083 23:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.083 23:12:36 -- common/autotest_common.sh@10 -- # set +x 00:30:08.083 ************************************ 00:30:08.083 START TEST nvmf_digest_clean 00:30:08.083 ************************************ 00:30:08.083 23:12:36 -- common/autotest_common.sh@1104 -- # run_digest 00:30:08.083 23:12:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:08.083 23:12:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:08.083 23:12:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:08.083 23:12:36 -- common/autotest_common.sh@10 -- # set +x 00:30:08.083 23:12:36 -- nvmf/common.sh@469 -- # nvmfpid=92112 00:30:08.083 23:12:36 -- nvmf/common.sh@470 -- # waitforlisten 92112 00:30:08.083 23:12:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:08.083 23:12:36 -- common/autotest_common.sh@819 -- # '[' -z 92112 ']' 00:30:08.083 23:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.083 23:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:08.083 23:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.083 23:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:08.083 23:12:36 -- common/autotest_common.sh@10 -- # set +x 00:30:08.083 [2024-06-09 23:12:36.194083] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:08.083 [2024-06-09 23:12:36.194130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.083 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.083 [2024-06-09 23:12:36.260593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.343 [2024-06-09 23:12:36.323243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:08.343 [2024-06-09 23:12:36.323360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.343 [2024-06-09 23:12:36.323369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.343 [2024-06-09 23:12:36.323375] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.343 [2024-06-09 23:12:36.323399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.913 23:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:08.913 23:12:36 -- common/autotest_common.sh@852 -- # return 0 00:30:08.913 23:12:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:08.913 23:12:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:08.913 23:12:36 -- common/autotest_common.sh@10 -- # set +x 00:30:08.913 23:12:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:08.913 23:12:37 -- host/digest.sh@120 -- # common_target_config 00:30:08.913 23:12:37 -- host/digest.sh@43 -- # rpc_cmd 00:30:08.913 23:12:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:08.913 23:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.173 null0 00:30:09.173 [2024-06-09 23:12:37.102293] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.173 [2024-06-09 23:12:37.126495] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.173 23:12:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.173 23:12:37 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:09.173 23:12:37 -- host/digest.sh@77 -- # local rw bs qd 00:30:09.174 23:12:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:09.174 23:12:37 -- host/digest.sh@80 -- # rw=randread 00:30:09.174 23:12:37 -- host/digest.sh@80 -- # bs=4096 00:30:09.174 23:12:37 -- host/digest.sh@80 -- # qd=128 00:30:09.174 23:12:37 -- host/digest.sh@82 -- # bperfpid=92463 00:30:09.174 23:12:37 -- host/digest.sh@83 -- # waitforlisten 92463 /var/tmp/bperf.sock 00:30:09.174 23:12:37 -- common/autotest_common.sh@819 -- # '[' -z 92463 ']' 00:30:09.174 23:12:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:09.174 23:12:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:09.174 23:12:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:09.174 23:12:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:09.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
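The target side of this run is the nvmf_tgt instance started above inside the cvl_0_0_ns_spdk namespace (the trace shows NVMF_APP being prefixed with the "ip netns exec" command). A minimal sketch of that step, assuming the workspace paths shown in this log; the polling loop below is only a stand-in for the real waitforlisten() helper from autotest_common.sh:

# start the NVMe-oF target inside the test namespace, flags as in the trace
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# stand-in for waitforlisten(): wait until the RPC UNIX socket shows up
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done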
00:30:09.174 23:12:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:09.174 23:12:37 -- common/autotest_common.sh@10 -- # set +x 00:30:09.174 [2024-06-09 23:12:37.177688] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:09.174 [2024-06-09 23:12:37.177736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92463 ] 00:30:09.174 EAL: No free 2048 kB hugepages reported on node 1 00:30:09.174 [2024-06-09 23:12:37.234540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.174 [2024-06-09 23:12:37.296603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.116 23:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:10.116 23:12:37 -- common/autotest_common.sh@852 -- # return 0 00:30:10.116 23:12:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:10.116 23:12:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:10.116 23:12:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:10.116 23:12:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.116 23:12:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.377 nvme0n1 00:30:10.377 23:12:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:10.377 23:12:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.377 Running I/O for 2 seconds... 
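Condensing the xtrace above: once bdevperf is listening on /var/tmp/bperf.sock, the clean-digest run is driven entirely over that RPC socket before the latency table below is printed. A hedged sketch of the same sequence, with the commands copied from the trace and error handling omitted:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# finish bdevperf subsystem init (it was launched with --wait-for-rpc)
$rpc -s /var/tmp/bperf.sock framework_start_init
# attach the namespaced TCP listener with data digest (--ddgst) enabled,
# which forces a crc32c computation on every I/O
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# kick off the 2-second workload configured on the bdevperf command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests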
00:30:12.361 00:30:12.361 Latency(us) 00:30:12.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.361 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:12.361 nvme0n1 : 2.00 16529.75 64.57 0.00 0.00 7733.89 2648.75 18240.85 00:30:12.361 =================================================================================================================== 00:30:12.361 Total : 16529.75 64.57 0.00 0.00 7733.89 2648.75 18240.85 00:30:12.361 0 00:30:12.621 23:12:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:12.621 23:12:40 -- host/digest.sh@92 -- # get_accel_stats 00:30:12.621 23:12:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:12.621 23:12:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:12.621 | select(.opcode=="crc32c") 00:30:12.621 | "\(.module_name) \(.executed)"' 00:30:12.621 23:12:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:12.621 23:12:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:12.621 23:12:40 -- host/digest.sh@93 -- # exp_module=software 00:30:12.621 23:12:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:12.621 23:12:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:12.621 23:12:40 -- host/digest.sh@97 -- # killprocess 92463 00:30:12.621 23:12:40 -- common/autotest_common.sh@926 -- # '[' -z 92463 ']' 00:30:12.621 23:12:40 -- common/autotest_common.sh@930 -- # kill -0 92463 00:30:12.621 23:12:40 -- common/autotest_common.sh@931 -- # uname 00:30:12.621 23:12:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:12.621 23:12:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92463 00:30:12.621 23:12:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:12.621 23:12:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:12.621 23:12:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92463' 00:30:12.621 killing process with pid 92463 00:30:12.621 23:12:40 -- common/autotest_common.sh@945 -- # kill 92463 00:30:12.621 Received shutdown signal, test time was about 2.000000 seconds 00:30:12.621 00:30:12.621 Latency(us) 00:30:12.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.621 =================================================================================================================== 00:30:12.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:12.621 23:12:40 -- common/autotest_common.sh@950 -- # wait 92463 00:30:12.881 23:12:40 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:12.881 23:12:40 -- host/digest.sh@77 -- # local rw bs qd 00:30:12.881 23:12:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:12.881 23:12:40 -- host/digest.sh@80 -- # rw=randread 00:30:12.881 23:12:40 -- host/digest.sh@80 -- # bs=131072 00:30:12.881 23:12:40 -- host/digest.sh@80 -- # qd=16 00:30:12.881 23:12:40 -- host/digest.sh@82 -- # bperfpid=93158 00:30:12.881 23:12:40 -- host/digest.sh@83 -- # waitforlisten 93158 /var/tmp/bperf.sock 00:30:12.881 23:12:40 -- common/autotest_common.sh@819 -- # '[' -z 93158 ']' 00:30:12.881 23:12:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:12.881 23:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:12.881 23:12:40 
-- common/autotest_common.sh@824 -- # local max_retries=100 00:30:12.881 23:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.881 23:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:12.881 23:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:12.881 [2024-06-09 23:12:40.926740] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:12.882 [2024-06-09 23:12:40.926793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93158 ] 00:30:12.882 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:12.882 Zero copy mechanism will not be used. 00:30:12.882 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.882 [2024-06-09 23:12:40.985125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.882 [2024-06-09 23:12:41.045884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.821 23:12:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:13.821 23:12:41 -- common/autotest_common.sh@852 -- # return 0 00:30:13.821 23:12:41 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:13.821 23:12:41 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:13.821 23:12:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.822 23:12:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.822 23:12:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.081 nvme0n1 00:30:14.081 23:12:42 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:14.081 23:12:42 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.081 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:14.081 Zero copy mechanism will not be used. 00:30:14.081 Running I/O for 2 seconds... 
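After each run the script decides whether the digests were computed by the expected accel module by parsing accel_get_stats, as seen in the get_accel_stats trace above. A sketch of that check under the same assumptions as the trace (software module expected, at least one crc32c executed):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
read -r acc_module acc_executed < <(
  $rpc -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[]
            | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"')
# the clean-digest test passes only if crc32c ran at least once in software
(( acc_executed > 0 )) && [[ $acc_module == software ]]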
00:30:16.621 00:30:16.621 Latency(us) 00:30:16.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.621 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:16.621 nvme0n1 : 2.01 1834.63 229.33 0.00 0.00 8717.81 6171.31 24029.87 00:30:16.621 =================================================================================================================== 00:30:16.621 Total : 1834.63 229.33 0.00 0.00 8717.81 6171.31 24029.87 00:30:16.621 0 00:30:16.621 23:12:44 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:16.621 23:12:44 -- host/digest.sh@92 -- # get_accel_stats 00:30:16.621 23:12:44 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:16.621 23:12:44 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:16.621 | select(.opcode=="crc32c") 00:30:16.621 | "\(.module_name) \(.executed)"' 00:30:16.621 23:12:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:16.621 23:12:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:16.621 23:12:44 -- host/digest.sh@93 -- # exp_module=software 00:30:16.621 23:12:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:16.621 23:12:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:16.621 23:12:44 -- host/digest.sh@97 -- # killprocess 93158 00:30:16.621 23:12:44 -- common/autotest_common.sh@926 -- # '[' -z 93158 ']' 00:30:16.621 23:12:44 -- common/autotest_common.sh@930 -- # kill -0 93158 00:30:16.621 23:12:44 -- common/autotest_common.sh@931 -- # uname 00:30:16.621 23:12:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:16.621 23:12:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93158 00:30:16.621 23:12:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:16.621 23:12:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:16.621 23:12:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93158' 00:30:16.621 killing process with pid 93158 00:30:16.621 23:12:44 -- common/autotest_common.sh@945 -- # kill 93158 00:30:16.621 Received shutdown signal, test time was about 2.000000 seconds 00:30:16.621 00:30:16.621 Latency(us) 00:30:16.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.621 =================================================================================================================== 00:30:16.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.621 23:12:44 -- common/autotest_common.sh@950 -- # wait 93158 00:30:16.621 23:12:44 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:16.621 23:12:44 -- host/digest.sh@77 -- # local rw bs qd 00:30:16.621 23:12:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:16.621 23:12:44 -- host/digest.sh@80 -- # rw=randwrite 00:30:16.621 23:12:44 -- host/digest.sh@80 -- # bs=4096 00:30:16.621 23:12:44 -- host/digest.sh@80 -- # qd=128 00:30:16.621 23:12:44 -- host/digest.sh@82 -- # bperfpid=93849 00:30:16.621 23:12:44 -- host/digest.sh@83 -- # waitforlisten 93849 /var/tmp/bperf.sock 00:30:16.621 23:12:44 -- common/autotest_common.sh@819 -- # '[' -z 93849 ']' 00:30:16.621 23:12:44 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:16.621 23:12:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:16.621 
23:12:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:16.621 23:12:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:16.621 23:12:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:16.621 23:12:44 -- common/autotest_common.sh@10 -- # set +x 00:30:16.621 [2024-06-09 23:12:44.618384] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:16.621 [2024-06-09 23:12:44.618468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93849 ] 00:30:16.621 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.621 [2024-06-09 23:12:44.677420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.621 [2024-06-09 23:12:44.738937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.190 23:12:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:17.190 23:12:45 -- common/autotest_common.sh@852 -- # return 0 00:30:17.190 23:12:45 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:17.190 23:12:45 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:17.190 23:12:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:17.451 23:12:45 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.451 23:12:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:18.022 nvme0n1 00:30:18.022 23:12:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:18.022 23:12:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:18.022 Running I/O for 2 seconds... 
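The four clean runs differ only in the run_bperf arguments (workload, block size, queue depth). Purely to show how those map onto the bdevperf command line in the trace, a small illustrative loop that prints each invocation (the per-run framework_start_init, attach, perform_tests and killprocess steps from the earlier sketches are not repeated here):

for spec in "randread 4096 128" "randread 131072 16" \
            "randwrite 4096 128" "randwrite 131072 16"; do
  read -r rw bs qd <<< "$spec"
  # each iteration corresponds to one run_bperf call in this log
  echo bdevperf -m 2 -r /var/tmp/bperf.sock \
       -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc
done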
00:30:19.934 00:30:19.934 Latency(us) 00:30:19.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.934 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.934 nvme0n1 : 2.01 21885.24 85.49 0.00 0.00 5837.28 4423.68 23156.05 00:30:19.934 =================================================================================================================== 00:30:19.934 Total : 21885.24 85.49 0.00 0.00 5837.28 4423.68 23156.05 00:30:19.934 0 00:30:19.934 23:12:48 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:19.934 23:12:48 -- host/digest.sh@92 -- # get_accel_stats 00:30:19.934 23:12:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:19.934 23:12:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:19.934 23:12:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:19.934 | select(.opcode=="crc32c") 00:30:19.934 | "\(.module_name) \(.executed)"' 00:30:20.194 23:12:48 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:20.194 23:12:48 -- host/digest.sh@93 -- # exp_module=software 00:30:20.194 23:12:48 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:20.194 23:12:48 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:20.194 23:12:48 -- host/digest.sh@97 -- # killprocess 93849 00:30:20.194 23:12:48 -- common/autotest_common.sh@926 -- # '[' -z 93849 ']' 00:30:20.194 23:12:48 -- common/autotest_common.sh@930 -- # kill -0 93849 00:30:20.194 23:12:48 -- common/autotest_common.sh@931 -- # uname 00:30:20.194 23:12:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:20.194 23:12:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93849 00:30:20.194 23:12:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:20.194 23:12:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:20.194 23:12:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93849' 00:30:20.194 killing process with pid 93849 00:30:20.194 23:12:48 -- common/autotest_common.sh@945 -- # kill 93849 00:30:20.194 Received shutdown signal, test time was about 2.000000 seconds 00:30:20.194 00:30:20.194 Latency(us) 00:30:20.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.194 =================================================================================================================== 00:30:20.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.194 23:12:48 -- common/autotest_common.sh@950 -- # wait 93849 00:30:20.456 23:12:48 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:20.456 23:12:48 -- host/digest.sh@77 -- # local rw bs qd 00:30:20.456 23:12:48 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:20.456 23:12:48 -- host/digest.sh@80 -- # rw=randwrite 00:30:20.456 23:12:48 -- host/digest.sh@80 -- # bs=131072 00:30:20.456 23:12:48 -- host/digest.sh@80 -- # qd=16 00:30:20.456 23:12:48 -- host/digest.sh@82 -- # bperfpid=94540 00:30:20.456 23:12:48 -- host/digest.sh@83 -- # waitforlisten 94540 /var/tmp/bperf.sock 00:30:20.456 23:12:48 -- common/autotest_common.sh@819 -- # '[' -z 94540 ']' 00:30:20.456 23:12:48 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:20.456 23:12:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:20.456 
23:12:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:20.456 23:12:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:20.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:20.456 23:12:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:20.456 23:12:48 -- common/autotest_common.sh@10 -- # set +x 00:30:20.456 [2024-06-09 23:12:48.470366] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:20.456 [2024-06-09 23:12:48.470424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94540 ] 00:30:20.456 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:20.456 Zero copy mechanism will not be used. 00:30:20.456 EAL: No free 2048 kB hugepages reported on node 1 00:30:20.456 [2024-06-09 23:12:48.528826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.456 [2024-06-09 23:12:48.589277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.398 23:12:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:21.398 23:12:49 -- common/autotest_common.sh@852 -- # return 0 00:30:21.398 23:12:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:21.398 23:12:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:21.398 23:12:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:21.398 23:12:49 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.398 23:12:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:21.659 nvme0n1 00:30:21.659 23:12:49 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:21.659 23:12:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:21.659 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:21.659 Zero copy mechanism will not be used. 00:30:21.659 Running I/O for 2 seconds... 
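The "zero copy threshold" notices in the two large-block runs are expected: bdevperf reports that a 131072-byte I/O exceeds the 65536-byte zero-copy cutoff, so the socket layer copies instead. Roughly, with the numbers taken from the log:

io_size=131072 zcopy_threshold=65536
if (( io_size > zcopy_threshold )); then
  echo "I/O size of ${io_size} is greater than zero copy threshold (${zcopy_threshold})."
  echo "Zero copy mechanism will not be used."
fi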
00:30:24.208 00:30:24.208 Latency(us) 00:30:24.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.208 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:24.208 nvme0n1 : 2.01 1857.53 232.19 0.00 0.00 8594.36 6280.53 32986.45 00:30:24.208 =================================================================================================================== 00:30:24.208 Total : 1857.53 232.19 0.00 0.00 8594.36 6280.53 32986.45 00:30:24.208 0 00:30:24.208 23:12:51 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:24.208 23:12:51 -- host/digest.sh@92 -- # get_accel_stats 00:30:24.208 23:12:51 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:24.208 23:12:51 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:24.208 | select(.opcode=="crc32c") 00:30:24.208 | "\(.module_name) \(.executed)"' 00:30:24.208 23:12:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:24.208 23:12:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:24.208 23:12:51 -- host/digest.sh@93 -- # exp_module=software 00:30:24.208 23:12:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:24.208 23:12:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:24.208 23:12:51 -- host/digest.sh@97 -- # killprocess 94540 00:30:24.208 23:12:51 -- common/autotest_common.sh@926 -- # '[' -z 94540 ']' 00:30:24.208 23:12:51 -- common/autotest_common.sh@930 -- # kill -0 94540 00:30:24.208 23:12:51 -- common/autotest_common.sh@931 -- # uname 00:30:24.208 23:12:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:24.208 23:12:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94540 00:30:24.208 23:12:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:24.208 23:12:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:24.208 23:12:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94540' 00:30:24.208 killing process with pid 94540 00:30:24.208 23:12:51 -- common/autotest_common.sh@945 -- # kill 94540 00:30:24.208 Received shutdown signal, test time was about 2.000000 seconds 00:30:24.208 00:30:24.208 Latency(us) 00:30:24.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.208 =================================================================================================================== 00:30:24.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:24.208 23:12:51 -- common/autotest_common.sh@950 -- # wait 94540 00:30:24.208 23:12:52 -- host/digest.sh@126 -- # killprocess 92112 00:30:24.208 23:12:52 -- common/autotest_common.sh@926 -- # '[' -z 92112 ']' 00:30:24.208 23:12:52 -- common/autotest_common.sh@930 -- # kill -0 92112 00:30:24.208 23:12:52 -- common/autotest_common.sh@931 -- # uname 00:30:24.208 23:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:24.208 23:12:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92112 00:30:24.208 23:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:24.208 23:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:24.208 23:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92112' 00:30:24.208 killing process with pid 92112 00:30:24.208 23:12:52 -- common/autotest_common.sh@945 -- # kill 92112 00:30:24.208 23:12:52 -- common/autotest_common.sh@950 -- # wait 92112 00:30:24.208 00:30:24.208 real 
0m16.132s 00:30:24.208 user 0m31.658s 00:30:24.208 sys 0m3.046s 00:30:24.208 23:12:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.208 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.208 ************************************ 00:30:24.208 END TEST nvmf_digest_clean 00:30:24.208 ************************************ 00:30:24.208 23:12:52 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:24.208 23:12:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:24.208 23:12:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:24.208 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.208 ************************************ 00:30:24.208 START TEST nvmf_digest_error 00:30:24.208 ************************************ 00:30:24.208 23:12:52 -- common/autotest_common.sh@1104 -- # run_digest_error 00:30:24.208 23:12:52 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:24.208 23:12:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:24.208 23:12:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:24.208 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.208 23:12:52 -- nvmf/common.sh@469 -- # nvmfpid=95348 00:30:24.208 23:12:52 -- nvmf/common.sh@470 -- # waitforlisten 95348 00:30:24.208 23:12:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:24.208 23:12:52 -- common/autotest_common.sh@819 -- # '[' -z 95348 ']' 00:30:24.208 23:12:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.208 23:12:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:24.208 23:12:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.208 23:12:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:24.208 23:12:52 -- common/autotest_common.sh@10 -- # set +x 00:30:24.208 [2024-06-09 23:12:52.369597] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:24.208 [2024-06-09 23:12:52.369653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.468 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.468 [2024-06-09 23:12:52.436659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.468 [2024-06-09 23:12:52.501636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:24.468 [2024-06-09 23:12:52.501757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.468 [2024-06-09 23:12:52.501766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.468 [2024-06-09 23:12:52.501773] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:24.468 [2024-06-09 23:12:52.501792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.038 23:12:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:25.038 23:12:53 -- common/autotest_common.sh@852 -- # return 0 00:30:25.038 23:12:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:25.038 23:12:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:25.038 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.038 23:12:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.038 23:12:53 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:25.038 23:12:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.038 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.038 [2024-06-09 23:12:53.167696] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:25.038 23:12:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.038 23:12:53 -- host/digest.sh@104 -- # common_target_config 00:30:25.038 23:12:53 -- host/digest.sh@43 -- # rpc_cmd 00:30:25.038 23:12:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.038 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.300 null0 00:30:25.300 [2024-06-09 23:12:53.248409] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.300 [2024-06-09 23:12:53.272604] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.300 23:12:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.300 23:12:53 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:25.300 23:12:53 -- host/digest.sh@54 -- # local rw bs qd 00:30:25.300 23:12:53 -- host/digest.sh@56 -- # rw=randread 00:30:25.300 23:12:53 -- host/digest.sh@56 -- # bs=4096 00:30:25.300 23:12:53 -- host/digest.sh@56 -- # qd=128 00:30:25.300 23:12:53 -- host/digest.sh@58 -- # bperfpid=95609 00:30:25.300 23:12:53 -- host/digest.sh@60 -- # waitforlisten 95609 /var/tmp/bperf.sock 00:30:25.300 23:12:53 -- common/autotest_common.sh@819 -- # '[' -z 95609 ']' 00:30:25.300 23:12:53 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:25.300 23:12:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:25.300 23:12:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:25.300 23:12:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:25.300 23:12:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:25.300 23:12:53 -- common/autotest_common.sh@10 -- # set +x 00:30:25.300 [2024-06-09 23:12:53.331829] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:30:25.300 [2024-06-09 23:12:53.331899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95609 ] 00:30:25.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.300 [2024-06-09 23:12:53.389606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.300 [2024-06-09 23:12:53.451256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.242 23:12:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:26.242 23:12:54 -- common/autotest_common.sh@852 -- # return 0 00:30:26.242 23:12:54 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:26.242 23:12:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:26.242 23:12:54 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:26.242 23:12:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.242 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:30:26.242 23:12:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.242 23:12:54 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:26.242 23:12:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:26.503 nvme0n1 00:30:26.503 23:12:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:26.503 23:12:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.503 23:12:54 -- common/autotest_common.sh@10 -- # set +x 00:30:26.503 23:12:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.503 23:12:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:26.503 23:12:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:26.503 Running I/O for 2 seconds... 
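The error-path test differs from the clean one in that crc32c is assigned to the error accel module and then told to corrupt digests, which is why every completion below is reported as a transient transport error. A condensed sketch of that RPC sequence, with the commands copied from the trace (rpc_cmd is assumed to hit the target's default /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side: route crc32c through the error-injection accel module
$rpc accel_assign_opc -o crc32c -m error
# bdevperf side: keep per-error statistics and retry indefinitely
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# start with injection disabled, attach with data digest enabled
$rpc accel_error_inject_error -o crc32c -t disable
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt the next 256 crc32c results so the host sees data digest errors
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests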
00:30:26.765 [2024-06-09 23:12:54.692639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.692677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.692688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.703814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.703838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.703848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.716493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.716516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.716524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.727153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.727175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.727184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.740064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.740086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.740095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.751018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.751039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.751053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.763439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.763461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.773833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.773853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.773862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.786729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.786750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.786759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.798619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.798640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.798648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.809363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.809386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.809394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.821216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.821237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.821246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.832517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.832538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.832546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.843626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.843648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.843656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.855196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.855225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.855233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.866430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.866451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.866460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.878430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.878451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.878459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.889303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.889324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.889332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.900631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.900653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.900662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.912302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.912323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.912332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.923144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.765 [2024-06-09 23:12:54.923165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.765 [2024-06-09 23:12:54.923173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:26.765 [2024-06-09 23:12:54.934459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:26.766 [2024-06-09 23:12:54.934480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:26.766 [2024-06-09 23:12:54.934489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:54.946253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:54.946274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:54.946282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:54.957062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:54.957083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:54.957092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:54.968278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:54.968299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:54.968308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:54.980031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:54.980051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:54.980060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:54.990985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:54.991005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:54.991014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.002767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.002788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 
[2024-06-09 23:12:55.002797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.013829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.013851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.013860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.025426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.025447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.025455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.036857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.036878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.036887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.047821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.047841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.047853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.059665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.059686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.059695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.070731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.070752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.070760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.082611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.082632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20299 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.082641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.093489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.093509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.093518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.104781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.104802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.104811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.116414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.116434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.116443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.127534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.127555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.028 [2024-06-09 23:12:55.127564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.028 [2024-06-09 23:12:55.138536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.028 [2024-06-09 23:12:55.138557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.138566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.029 [2024-06-09 23:12:55.151245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.029 [2024-06-09 23:12:55.151266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.151275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.029 [2024-06-09 23:12:55.161901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.029 [2024-06-09 23:12:55.161922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:78 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.161931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.029 [2024-06-09 23:12:55.173943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.029 [2024-06-09 23:12:55.173964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.173973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.029 [2024-06-09 23:12:55.186924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.029 [2024-06-09 23:12:55.186945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.186954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.029 [2024-06-09 23:12:55.197566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.029 [2024-06-09 23:12:55.197587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.029 [2024-06-09 23:12:55.197596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.210480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.210510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.221314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.221335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.221344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.233163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.233184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.233192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.244072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.244092] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.244105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.255960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.255981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.255990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.266723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.266744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.266753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.278448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.278468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.278477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.289396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.289421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.289430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.301306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.301327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.301335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.312156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.312177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.312185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.323436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.323457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.323465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.335003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.335024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.335033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.346352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.346376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.346385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.357982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.358003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.358012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.368736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.368757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.368766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.380512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.380533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.380542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.391872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.291 [2024-06-09 23:12:55.391892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.291 [2024-06-09 23:12:55.391902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.291 [2024-06-09 23:12:55.402707] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.402727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.402736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.292 [2024-06-09 23:12:55.414601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.414621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.414631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.292 [2024-06-09 23:12:55.425384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.425408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.425417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.292 [2024-06-09 23:12:55.437149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.437170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.437180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.292 [2024-06-09 23:12:55.448412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.448433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.448441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.292 [2024-06-09 23:12:55.459307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.292 [2024-06-09 23:12:55.459328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.292 [2024-06-09 23:12:55.459336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.471077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.471098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.471107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:27.553 [2024-06-09 23:12:55.482147] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.482168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.482176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.493929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.493951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.493960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.504753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.504774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.504783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.516423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.516444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.516452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.527441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.527462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.527470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.539223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.539243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.539256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.550248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.550268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.550277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.553 [2024-06-09 23:12:55.561483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.553 [2024-06-09 23:12:55.561504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.553 [2024-06-09 23:12:55.561512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.573015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.573036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.573045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.584381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.584407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.584416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.595865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.595886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.595894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.606840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.606861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.606870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.618071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.618093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.618101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.629801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.629822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.629831] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.640889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.640915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.640923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.652603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.652624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.652633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.663603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.663625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.663634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.675221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.675242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.675251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.686423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.686444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.686453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.698127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.698148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.698157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.709120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.709141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.709150] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.554 [2024-06-09 23:12:55.720822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.554 [2024-06-09 23:12:55.720843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.554 [2024-06-09 23:12:55.720852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.731690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.731712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.731720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.742943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.742964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.742973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.754378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.754399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.754413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.765474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.765495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.765504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.777231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.777251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.777260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.788290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.788312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:27.816 [2024-06-09 23:12:55.788321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.800083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.800105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.800113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.811044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.811066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.811075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.822794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.822815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.822824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.833782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.833804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.833816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.844892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.844914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.844923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.856518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.856539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.867783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.867805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1500 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.867814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.879458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.879479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.879488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.890653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.890674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.890682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.902261] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.902281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.902290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.913246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.913268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.913277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.924342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.924363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.924372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.936053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.936078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.936087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.947487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.947508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:13099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.947517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.958235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.958255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.958264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.816 [2024-06-09 23:12:55.970084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.816 [2024-06-09 23:12:55.970105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.816 [2024-06-09 23:12:55.970114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.817 [2024-06-09 23:12:55.981225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.817 [2024-06-09 23:12:55.981247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.817 [2024-06-09 23:12:55.981256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:27.817 [2024-06-09 23:12:55.992209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:27.817 [2024-06-09 23:12:55.992230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.817 [2024-06-09 23:12:55.992239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.077 [2024-06-09 23:12:56.004110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.077 [2024-06-09 23:12:56.004131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.077 [2024-06-09 23:12:56.004140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.077 [2024-06-09 23:12:56.015148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.077 [2024-06-09 23:12:56.015169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.015177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.026796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.026817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.026829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.037572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.037593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.037602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.049064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.049085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.060771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.060791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.060800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.071648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.071670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.071679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.083640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.083662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.083671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.094690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.094720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.106475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 
[2024-06-09 23:12:56.106496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.106505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.117081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.117102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.117110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.129068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.129093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.129101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.139894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.139915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.139924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.150976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.150997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.151006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.162649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.162672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.162681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.173802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.173823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.173832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.184840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.184861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.184871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.196780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.196801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.196809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.207678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.207699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.207708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.219330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.219352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.219361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.230525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.230547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.230556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.241541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.241563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.241571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.078 [2024-06-09 23:12:56.253315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.078 [2024-06-09 23:12:56.253336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.078 [2024-06-09 23:12:56.253344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.264253] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.264275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.264284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.276106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.276127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.276136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.287184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.287205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.287214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.298166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.298188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.298196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.309984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.310005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.310014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.321029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.321049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.321066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.340 [2024-06-09 23:12:56.332596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.332617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.332626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:28.340 [2024-06-09 23:12:56.343728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.340 [2024-06-09 23:12:56.343749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.340 [2024-06-09 23:12:56.343757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.355272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.355291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.355300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.366272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.366293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.366301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.377998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.378019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.378028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.389923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.389944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.389953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.399927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.399949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.399957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.412187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.412209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.412218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.422286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.422311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.422319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.435223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.435244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.435252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.450515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.450536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.450545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.461764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.461785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.461794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.475507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.475528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.475537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.486412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.486433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.486441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.498324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.498345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.498354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.341 [2024-06-09 23:12:56.510041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.341 [2024-06-09 23:12:56.510062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.341 [2024-06-09 23:12:56.510070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.602 [2024-06-09 23:12:56.521205] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.602 [2024-06-09 23:12:56.521226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.602 [2024-06-09 23:12:56.521234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.602 [2024-06-09 23:12:56.532907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.602 [2024-06-09 23:12:56.532929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.602 [2024-06-09 23:12:56.532938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.602 [2024-06-09 23:12:56.544075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.602 [2024-06-09 23:12:56.544096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.602 [2024-06-09 23:12:56.544105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.602 [2024-06-09 23:12:56.554913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.602 [2024-06-09 23:12:56.554933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.602 [2024-06-09 23:12:56.554941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.602 [2024-06-09 23:12:56.566988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.567008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.567017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.577945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.577966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.577974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.589766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.589787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.589795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.599594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.599623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.612124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.612145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.612153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.625050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.625073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.625086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.637178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.637199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.637208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.655057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.655078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.655087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.666513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.666534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:4123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 
[2024-06-09 23:12:56.666543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 [2024-06-09 23:12:56.677766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6ab600) 00:30:28.603 [2024-06-09 23:12:56.677787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.603 [2024-06-09 23:12:56.677796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.603 00:30:28.603 Latency(us) 00:30:28.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.603 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:28.603 nvme0n1 : 2.01 22143.70 86.50 0.00 0.00 5771.54 3099.31 17585.49 00:30:28.603 =================================================================================================================== 00:30:28.603 Total : 22143.70 86.50 0.00 0.00 5771.54 3099.31 17585.49 00:30:28.603 0 00:30:28.603 23:12:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:28.603 23:12:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:28.603 23:12:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:28.603 | .driver_specific 00:30:28.603 | .nvme_error 00:30:28.603 | .status_code 00:30:28.603 | .command_transient_transport_error' 00:30:28.603 23:12:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:28.865 23:12:56 -- host/digest.sh@71 -- # (( 174 > 0 )) 00:30:28.865 23:12:56 -- host/digest.sh@73 -- # killprocess 95609 00:30:28.865 23:12:56 -- common/autotest_common.sh@926 -- # '[' -z 95609 ']' 00:30:28.865 23:12:56 -- common/autotest_common.sh@930 -- # kill -0 95609 00:30:28.865 23:12:56 -- common/autotest_common.sh@931 -- # uname 00:30:28.865 23:12:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:28.865 23:12:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95609 00:30:28.865 23:12:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:28.865 23:12:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:28.865 23:12:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95609' 00:30:28.865 killing process with pid 95609 00:30:28.865 23:12:56 -- common/autotest_common.sh@945 -- # kill 95609 00:30:28.865 Received shutdown signal, test time was about 2.000000 seconds 00:30:28.865 00:30:28.865 Latency(us) 00:30:28.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:28.865 =================================================================================================================== 00:30:28.865 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:28.865 23:12:56 -- common/autotest_common.sh@950 -- # wait 95609 00:30:29.127 23:12:57 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:30:29.127 23:12:57 -- host/digest.sh@54 -- # local rw bs qd 00:30:29.127 23:12:57 -- host/digest.sh@56 -- # rw=randread 00:30:29.127 23:12:57 -- host/digest.sh@56 -- # bs=131072 00:30:29.127 23:12:57 -- host/digest.sh@56 -- # qd=16 00:30:29.127 23:12:57 -- host/digest.sh@58 -- # bperfpid=96306 00:30:29.127 23:12:57 -- host/digest.sh@60 -- # waitforlisten 96306 /var/tmp/bperf.sock 
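The host/digest.sh trace above is where this run is graded: with NVMe error statistics enabled in bdev_nvme, bdevperf's bdev_get_iostat output carries per-status-code NVMe error counters, and the jq filter pulls out command_transient_transport_error (174 here), which must be greater than zero for the test to pass before that bdevperf instance is shut down. A minimal sketch of the same check, assuming it is run from the SPDK source tree while bdevperf is still listening on /var/tmp/bperf.sock and exposing the bdev as nvme0n1:

  # socket path, bdev name and jq path are taken from the trace above; the rest is illustrative
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the injected digest corruption must have surfaced as transient transport errors
  (( errcount > 0 )) && echo "data digest errors counted: $errcount"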
00:30:29.127 23:12:57 -- common/autotest_common.sh@819 -- # '[' -z 96306 ']' 00:30:29.127 23:12:57 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:29.127 23:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:29.127 23:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:29.127 23:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:29.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:29.127 23:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:29.127 23:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.127 [2024-06-09 23:12:57.087106] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:29.127 [2024-06-09 23:12:57.087158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96306 ] 00:30:29.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:29.127 Zero copy mechanism will not be used. 00:30:29.127 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.127 [2024-06-09 23:12:57.145352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.127 [2024-06-09 23:12:57.206327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.698 23:12:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:29.698 23:12:57 -- common/autotest_common.sh@852 -- # return 0 00:30:29.698 23:12:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:29.698 23:12:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:29.958 23:12:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:29.958 23:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.958 23:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:29.958 23:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.958 23:12:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:29.958 23:12:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:30.219 nvme0n1 00:30:30.219 23:12:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:30.219 23:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:30.219 23:12:58 -- common/autotest_common.sh@10 -- # set +x 00:30:30.219 23:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:30.219 23:12:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:30.219 23:12:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:30.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:30.219 Zero copy mechanism will not be used. 
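The setup traced above prepares the second injection pass (randread, 128 KiB I/O, queue depth 16, 2 seconds): bdevperf is started against /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, crc32c corruption is switched off so the controller can attach cleanly with data digest (--ddgst) enabled, and corruption is then re-enabled before perform_tests drives the workload. Condensed into a sketch, assuming it is run from the SPDK source tree, the nvmf target already exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the rpc.py calls without -s reach that target on its default socket:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  # the real test waits for /var/tmp/bperf.sock (waitforlisten) before issuing any RPC
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable   # attach with digests intact
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32   # flags as traced; likely one bad crc32c per 32 ops
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest fails the data digest check in the host's TCP receive path and is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the long run of *ERROR*/NOTICE pairs below records; the 0x20 stride in the sqhd values is consistent with one error roughly every 32 commands.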
00:30:30.219 Running I/O for 2 seconds... 00:30:30.219 [2024-06-09 23:12:58.393901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.219 [2024-06-09 23:12:58.393943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.219 [2024-06-09 23:12:58.393955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.410926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.410953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.410963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.428612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.428635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.428645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.446940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.446962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.446970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.464012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.464035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.464043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.484391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.484417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.484426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.503142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.503164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.503173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.520881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.520902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.520911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.536824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.536846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.536855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.554292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.554314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.554323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.574281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.574302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.574310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.590253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.590275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.590284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.611131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.611152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.611161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.628103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.628125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.481 [2024-06-09 23:12:58.628134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.481 [2024-06-09 23:12:58.646571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.481 [2024-06-09 23:12:58.646592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.481 [2024-06-09 23:12:58.646601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.665661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.665682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.665691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.683026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.683047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.683056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.700658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.700683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.700692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.718987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.719009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.734459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.734481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.734490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.754480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.754502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.754511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.772286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.772308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.772317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.788685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.788706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.788715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.805528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.805550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.805558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.822266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.822288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.822296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.838968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.743 [2024-06-09 23:12:58.838989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.743 [2024-06-09 23:12:58.838998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:30.743 [2024-06-09 23:12:58.856121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.744 [2024-06-09 23:12:58.856143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.744 [2024-06-09 23:12:58.856151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:30.744 [2024-06-09 23:12:58.874656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.744 [2024-06-09 23:12:58.874678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.744 [2024-06-09 23:12:58.874687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.744 [2024-06-09 23:12:58.893585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.744 [2024-06-09 23:12:58.893607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.744 [2024-06-09 23:12:58.893615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:30.744 [2024-06-09 23:12:58.910463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:30.744 [2024-06-09 23:12:58.910485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.744 [2024-06-09 23:12:58.910494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:58.927797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:58.927818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:58.927826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:58.946431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:58.946452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:58.946461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:58.964276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:58.964297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:58.964306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:58.980219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:58.980240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:58.980248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:58.998535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:58.998557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:58.998569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.016558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.016579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.016587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.033992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.034012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.034021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.054802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.054823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.054832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.072435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.072456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.072465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.089324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.089346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.089355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.106058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.106080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.106089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.122909] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.122931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.122940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.140269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.140291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.140300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.157831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.157859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.157868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.008 [2024-06-09 23:12:59.176800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.008 [2024-06-09 23:12:59.176822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.008 [2024-06-09 23:12:59.176830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.194051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.194073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.194082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.211338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.211360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.211369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.230215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.230237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.230245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.246498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.246520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.246529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.261993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.262016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.262024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.280583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.280605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.280613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.297711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.297733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.297741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.316736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.316758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.316766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.333490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.333512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.333521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.349967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.349988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.349997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.370387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.370413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.370422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.390114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.390136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.390145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.408191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.408212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.408221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.424766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.424787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.424796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.441472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.441494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.441502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.459552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.459578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.459586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.476827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.476849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.476858] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.492570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.492592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.492600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.333 [2024-06-09 23:12:59.509950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.333 [2024-06-09 23:12:59.509972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.333 [2024-06-09 23:12:59.509980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.529127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.529150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.529158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.547729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.547751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.547759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.562939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.562961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.562970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.582305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.582327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.582335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.598495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.598516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.598525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.618629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.618650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.618658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.638097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.638119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.638128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.655084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.655105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.655114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.676783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.676805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.676814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.693335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.693357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.693366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.709857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.709879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.709887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.728868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.728890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.728899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.746391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.746418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.746426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.595 [2024-06-09 23:12:59.765610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.595 [2024-06-09 23:12:59.765631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.595 [2024-06-09 23:12:59.765643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.785585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.785607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.785616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.801503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.801525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.801533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.821916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.821938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.821946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.839624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.839646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.839654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.856174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.856196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.856204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.873961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.873982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.873991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.892434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.892456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.892464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.908151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.908173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.908181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.925688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.925713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.925721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.943662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.943683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.943691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.963280] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.963301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.963310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.981160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 
00:30:31.858 [2024-06-09 23:12:59.981181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:12:59.998349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:12:59.998370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:12:59.998379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:13:00.017012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:13:00.017035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:13:00.017045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:31.858 [2024-06-09 23:13:00.033902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:31.858 [2024-06-09 23:13:00.033924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:31.858 [2024-06-09 23:13:00.033932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.051444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.051467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.051476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.068875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.068897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.068909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.085994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.086015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.086024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.102800] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.102822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.102830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.121154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.121175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.121184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.136891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.136913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.136921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.154892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.154913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.154922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.172151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.172173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.172181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.189209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.189230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.189239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.207700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.207722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.207731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.224700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.224725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.224734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.241878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.241899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.241908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.258321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.258342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.258350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.276837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.276858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.276866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.120 [2024-06-09 23:13:00.294752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.120 [2024-06-09 23:13:00.294774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.120 [2024-06-09 23:13:00.294782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.383 [2024-06-09 23:13:00.316827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.383 [2024-06-09 23:13:00.316848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.383 [2024-06-09 23:13:00.316857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.383 [2024-06-09 23:13:00.333463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000) 00:30:32.383 [2024-06-09 23:13:00.333485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.383 [2024-06-09 23:13:00.333493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:32.383 [2024-06-09 23:13:00.350861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000)
00:30:32.383 [2024-06-09 23:13:00.350883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:32.383 [2024-06-09 23:13:00.350892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:32.383 [2024-06-09 23:13:00.368102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4c000)
00:30:32.383 [2024-06-09 23:13:00.368123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:32.383 [2024-06-09 23:13:00.368132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:32.383
00:30:32.383 Latency(us)
00:30:32.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.383 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:32.383 nvme0n1 : 2.01 1736.37 217.05 0.00 0.00 9209.77 7318.19 23920.64
00:30:32.383 ===================================================================================================================
00:30:32.383 Total : 1736.37 217.05 0.00 0.00 9209.77 7318.19 23920.64
00:30:32.383 0
00:30:32.383 23:13:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:32.383 23:13:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:32.383 23:13:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:32.383 23:13:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:32.383 | .driver_specific
00:30:32.383 | .nvme_error
00:30:32.383 | .status_code
00:30:32.383 | .command_transient_transport_error'
00:30:32.383 23:13:00 -- host/digest.sh@71 -- # (( 112 > 0 ))
00:30:32.383 23:13:00 -- host/digest.sh@73 -- # killprocess 96306
00:30:32.383 23:13:00 -- common/autotest_common.sh@926 -- # '[' -z 96306 ']'
00:30:32.383 23:13:00 -- common/autotest_common.sh@930 -- # kill -0 96306
00:30:32.383 23:13:00 -- common/autotest_common.sh@931 -- # uname
00:30:32.644 23:13:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:32.644 23:13:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96306
00:30:32.644 23:13:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:32.644 23:13:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:32.644 23:13:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96306'
00:30:32.644 killing process with pid 96306
00:30:32.644 23:13:00 -- common/autotest_common.sh@945 -- # kill 96306
00:30:32.644 Received shutdown signal, test time was about 2.000000 seconds
00:30:32.644
00:30:32.644 Latency(us)
00:30:32.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:32.644 ===================================================================================================================
00:30:32.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:32.645 23:13:00 -- common/autotest_common.sh@950 -- # wait 96306
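The get_transient_errcount trace just above is the pass/fail check for the randread digest run: it reads the controller's error statistics over the bperf RPC socket and asserts that the transient-transport-error counter is non-zero (112 in this run). A minimal stand-alone sketch of that query, built only from the rpc.py invocation and jq filter that appear verbatim in the trace; the SPDK_DIR shorthand is an assumption added here for readability:

  # Assumes a bdevperf instance is listening on /var/tmp/bperf.sock and that the bdev_nvme
  # layer was configured with --nvme-error-stat, as in the trace above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # hypothetical shorthand for the workspace path shown above
  count=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # digest.sh treats the run as successful only if at least one such error was recorded
  (( count > 0 )) && echo "saw $count transient transport errors on nvme0n1"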
00:30:32.645 23:13:00 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:30:32.645 23:13:00 -- host/digest.sh@54 -- # local rw bs qd
00:30:32.645 23:13:00 -- host/digest.sh@56 -- # rw=randwrite
00:30:32.645 23:13:00 -- host/digest.sh@56 -- # bs=4096
00:30:32.645 23:13:00 -- host/digest.sh@56 -- # qd=128
00:30:32.645 23:13:00 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:32.645 23:13:00 -- host/digest.sh@58 -- # bperfpid=96992
00:30:32.645 23:13:00 -- host/digest.sh@60 -- # waitforlisten 96992 /var/tmp/bperf.sock
00:30:32.645 23:13:00 -- common/autotest_common.sh@819 -- # '[' -z 96992 ']'
00:30:32.645 23:13:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:32.645 23:13:00 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:32.645 23:13:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:32.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:32.645 23:13:00 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:32.645 23:13:00 -- common/autotest_common.sh@10 -- # set +x
00:30:32.645 [2024-06-09 23:13:00.769432] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:30:32.645 [2024-06-09 23:13:00.769486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96992 ]
00:30:32.645 EAL: No free 2048 kB hugepages reported on node 1
00:30:32.905 [2024-06-09 23:13:00.826433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:32.905 [2024-06-09 23:13:00.887987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:33.477 23:13:01 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:33.477 23:13:01 -- common/autotest_common.sh@852 -- # return 0
00:30:33.477 23:13:01 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:33.477 23:13:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:33.477 23:13:01 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:33.477 23:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:33.477 23:13:01 -- common/autotest_common.sh@10 -- # set +x
00:30:33.477 23:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:33.477 23:13:01 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:33.477 23:13:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:34.051 nvme0n1
00:30:34.051 23:13:01 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:34.051 23:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:34.051 23:13:01 -- common/autotest_common.sh@10 -- # set +x
00:30:34.051 23:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:34.051 23:13:01 -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:34.051 23:13:01 -- host/digest.sh@19
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:34.051 Running I/O for 2 seconds... 00:30:34.051 [2024-06-09 23:13:02.084718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fda78 00:30:34.051 [2024-06-09 23:13:02.085353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.085384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.096772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.097228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.097251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.108571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.108990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.109011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.120387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.120712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.120733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.132254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.132578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.132598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.144246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.144730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.144750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.155977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.156300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
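The run_bperf_err randwrite 4096 128 trace above shows the whole write-phase setup before the dump that continues below: bdevperf is started with its own RPC socket, per-controller error statistics are enabled with unlimited bdev retries, the TCP controller is attached with data digest enabled (--ddgst), the accel crc32c injector is armed to corrupt 256 operations, and perform_tests kicks off the 2-second run. A condensed sketch of that sequence, using only commands that appear verbatim in the trace; the RPC shorthand variable is an assumption added here, and the target at 10.0.0.2:4420 is assumed to be the nvmf target configured earlier in this run:

  # Assumes bdevperf was launched with -r /var/tmp/bperf.sock, as shown above.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # hypothetical shorthand
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # record NVMe error counters, retry failed I/O indefinitely
  $RPC accel_error_inject_error -o crc32c -t disable                    # start from a clean injector state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # data digest on
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256             # corrupt the next 256 crc32c operations
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the injector armed, each corrupted crc32c on the receive path shows up below as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly the counter the test asserts on afterwards.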
00:30:34.051 [2024-06-09 23:13:02.156320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.167818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.168257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.168277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.179622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.180081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.180101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.191449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.191773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.191792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.203218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.203542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.203562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.214998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.215513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.215532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.051 [2024-06-09 23:13:02.226739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.051 [2024-06-09 23:13:02.227197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.051 [2024-06-09 23:13:02.227217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.238457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.238995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17931 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.239015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.250219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.250704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.250724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.261938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.262370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.262390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.273668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.274138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.274158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.285438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.285757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.285776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.297166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.297514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.297534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.308900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.309245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.309264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.320647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.321083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13045 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.321102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.332396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.332813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.332833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.344139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.344585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.344607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.355918] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.356382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.356405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.367659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.368062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.368081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.379407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.379737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.379756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.391244] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.391716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.391736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.402969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.403413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:13066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.403433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.414733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.415174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.415193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.426479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.426947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.426966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.438175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.438611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.438630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.449941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.450281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.450300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.461656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.462066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.462085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.473376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.473818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.473838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.312 [2024-06-09 23:13:02.485175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.312 [2024-06-09 23:13:02.485486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:78 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.312 [2024-06-09 23:13:02.485506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.496938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.497389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.497412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.508658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.509112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.509132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.520418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.520883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.520902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.532093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.532498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.532517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.543806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.544134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.544154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.555561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.555874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.555893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.567293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.567707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.567727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.579026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.579487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.579507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.573 [2024-06-09 23:13:02.590742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.573 [2024-06-09 23:13:02.591196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.573 [2024-06-09 23:13:02.591215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.602512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.602938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.602957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.614252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.614690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.614711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.626032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.626472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.626492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.637724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.638185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.638205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.649505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.649946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.649969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.661199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.661625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.661645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.672964] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.673395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.673429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.684770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.685102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.685122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.696526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.696964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.696983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.708315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.708622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.708642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.720059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.720383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.720406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.731776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.732247] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.732267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.574 [2024-06-09 23:13:02.743511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.574 [2024-06-09 23:13:02.744061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.574 [2024-06-09 23:13:02.744080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.755291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.755728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.755748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.767022] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.767473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.767492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.778757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.779155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.779175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.790460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.790900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.790920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.802196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.802645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.802665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.813935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 
23:13:02.814258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.814278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.825710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.826059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.826079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.837457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.837881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.837901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.849212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.849653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.849672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.860944] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.861388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.861412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.872728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.873158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.873177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.884496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.884861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.884881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.896212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 
[2024-06-09 23:13:02.896532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.896552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.908055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.908423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.908443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.919824] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.920255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.920275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.931600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.931998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.932018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.943318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.943778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.943797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.955091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.955427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.955451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.966894] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.967321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.967340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.978624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 
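A quick way to sanity-check a dump like the one above is to count the transient-transport-error completions and compare against the injector setting (-i 256 in this run); this assumes the bdevperf console output has been captured to a file, with bperf_console.log as a hypothetical name:

  # Roughly one (00/22) completion is expected per injected crc32c corruption before the
  # bdev layer retries the I/O, so the count gives a coarse cross-check of the injection.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_console.log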
00:30:34.836 [2024-06-09 23:13:02.979050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.979070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:02.990377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:02.990716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:02.990736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:34.836 [2024-06-09 23:13:03.002106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:34.836 [2024-06-09 23:13:03.002520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:34.836 [2024-06-09 23:13:03.002540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.013912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.014404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.014424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.025719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.026062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.026082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.037420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.037852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.037914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.049426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.049851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.049870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.061115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with 
pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.061443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.061466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.072874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.073322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.073341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.084641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.085011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.098 [2024-06-09 23:13:03.096373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.098 [2024-06-09 23:13:03.096730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.098 [2024-06-09 23:13:03.096749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.108080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.108394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.108417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.119826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.120275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.120295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.131586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.131895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.131915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.143388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.143779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.143798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.155215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.155678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.155698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.166927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.167465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.167485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.178760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.179105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.179124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.190517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.190842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.190861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.202205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.202558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.202578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.213955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.214264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.214284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.225738] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.226088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.226108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.237468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.237793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.237812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.249281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.249614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.249634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.261027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.261475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.261495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.099 [2024-06-09 23:13:03.272780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.099 [2024-06-09 23:13:03.273206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.099 [2024-06-09 23:13:03.273225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.284554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.284891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.284911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.296250] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.296664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.296683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.308027] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.308354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.308373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.319806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.320122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.331542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.331980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.331999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.343319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.343730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.343750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.355064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.355375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.355394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.366827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.367164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.367186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.378603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.379039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.379059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.390454] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.390929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.390949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.402128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.402467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.402487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.413860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.414294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.414313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.425573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.426003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.426023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.361 [2024-06-09 23:13:03.437335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.361 [2024-06-09 23:13:03.437768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.361 [2024-06-09 23:13:03.437787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.449099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.449426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.449445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.460804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.461246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.461265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 
23:13:03.472562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.473023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.473043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.484310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.484749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.484769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.496065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.496421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.496440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.507815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.508147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.508167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.519540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.519988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.520007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.362 [2024-06-09 23:13:03.531270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.362 [2024-06-09 23:13:03.531725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.362 [2024-06-09 23:13:03.531745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.542992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.543313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.543333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:30:35.623 [2024-06-09 23:13:03.554710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.555021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.555041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.566437] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.566889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.566908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.578152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.578594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.578614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.589891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.590206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.590225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.601617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.601969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.601988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.613317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.613745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.613764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.625073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.625398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.625421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.636827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.623 [2024-06-09 23:13:03.637333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.623 [2024-06-09 23:13:03.637353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.623 [2024-06-09 23:13:03.648509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.648974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.648993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.660297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.660658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.660677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.672000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.672327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.672349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.683753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.684162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.684182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.695522] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.695832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.695851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.707218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.707532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.707552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.718898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.719342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.719362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.730643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.731112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.731131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.742343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.742812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.742831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.754102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.754558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.754577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.765799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.766241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.766261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.777532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.778008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.778028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.789251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.624 [2024-06-09 23:13:03.789584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.624 [2024-06-09 23:13:03.789604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.624 [2024-06-09 23:13:03.800996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.885 [2024-06-09 23:13:03.801441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.885 [2024-06-09 23:13:03.801461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.885 [2024-06-09 23:13:03.812741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.813057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.813076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.824482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.824947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.824966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.836208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.836532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.836551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.847953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.848390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.848412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.859723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.860141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.860161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.871487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.871799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.871819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.883232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.883750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.883769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.894937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.895371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.895390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.906720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.907038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.907058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.918428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.918771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.918791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.930085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.930548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.930567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.941841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.942297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.942317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.953582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.953903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.953923] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.965302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.965818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.965838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.977003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.977339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.977361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:03.988679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:03.989029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:03.989048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.000507] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.000878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.000897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.012277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.012586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.012606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.024009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.024342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.024362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.035752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.036064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.036083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.047651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.047973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.047993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:35.886 [2024-06-09 23:13:04.059383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d770) with pdu=0x2000190fcdd0 00:30:35.886 [2024-06-09 23:13:04.059875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:35.886 [2024-06-09 23:13:04.059894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:36.147 00:30:36.147 Latency(us) 00:30:36.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.147 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.147 nvme0n1 : 2.01 21593.45 84.35 0.00 0.00 5915.93 3017.39 18131.63 00:30:36.147 =================================================================================================================== 00:30:36.147 Total : 21593.45 84.35 0.00 0.00 5915.93 3017.39 18131.63 00:30:36.147 0 00:30:36.147 23:13:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:36.147 23:13:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:36.147 23:13:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:36.147 | .driver_specific 00:30:36.147 | .nvme_error 00:30:36.147 | .status_code 00:30:36.147 | .command_transient_transport_error' 00:30:36.147 23:13:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:36.147 23:13:04 -- host/digest.sh@71 -- # (( 169 > 0 )) 00:30:36.147 23:13:04 -- host/digest.sh@73 -- # killprocess 96992 00:30:36.147 23:13:04 -- common/autotest_common.sh@926 -- # '[' -z 96992 ']' 00:30:36.147 23:13:04 -- common/autotest_common.sh@930 -- # kill -0 96992 00:30:36.147 23:13:04 -- common/autotest_common.sh@931 -- # uname 00:30:36.147 23:13:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:36.147 23:13:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96992 00:30:36.147 23:13:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:36.147 23:13:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:36.147 23:13:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96992' 00:30:36.147 killing process with pid 96992 00:30:36.147 23:13:04 -- common/autotest_common.sh@945 -- # kill 96992 00:30:36.147 Received shutdown signal, test time was about 2.000000 seconds 00:30:36.147 00:30:36.147 Latency(us) 00:30:36.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.148 =================================================================================================================== 00:30:36.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.148 23:13:04 -- common/autotest_common.sh@950 -- # wait 96992 00:30:36.408 
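The trace above is where host/digest.sh turns the injected CRC failures into a result: get_transient_errcount asks the bdevperf instance for its I/O statistics over the bperf RPC socket and extracts the transient-transport-error counter from the returned JSON (169 in this run) before the helper process is killed. A minimal sketch of the same query run by hand, reusing the socket path, bdev name and jq filter from the trace (both paths are specific to this job and may differ elsewhere):

# Ask bdevperf for per-bdev statistics over its private RPC socket and
# pull out the NVMe "command transient transport error" counter.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$($SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error')

# digest.sh passes the case only when this counter is non-zero, i.e. the
# data-digest verification actually caught the corrupted CRC32C payloads.
echo "transient transport errors: $errcount"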
23:13:04 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:36.408 23:13:04 -- host/digest.sh@54 -- # local rw bs qd 00:30:36.408 23:13:04 -- host/digest.sh@56 -- # rw=randwrite 00:30:36.408 23:13:04 -- host/digest.sh@56 -- # bs=131072 00:30:36.408 23:13:04 -- host/digest.sh@56 -- # qd=16 00:30:36.408 23:13:04 -- host/digest.sh@58 -- # bperfpid=97708 00:30:36.408 23:13:04 -- host/digest.sh@60 -- # waitforlisten 97708 /var/tmp/bperf.sock 00:30:36.408 23:13:04 -- common/autotest_common.sh@819 -- # '[' -z 97708 ']' 00:30:36.408 23:13:04 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:36.408 23:13:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:36.408 23:13:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:36.408 23:13:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:36.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:36.408 23:13:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:36.408 23:13:04 -- common/autotest_common.sh@10 -- # set +x 00:30:36.408 [2024-06-09 23:13:04.478043] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:36.408 [2024-06-09 23:13:04.478098] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97708 ] 00:30:36.408 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:36.408 Zero copy mechanism will not be used. 
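Here run_bperf_err starts the next case (randwrite, 128 KiB I/O, queue depth 16): it launches a fresh bdevperf with its own RPC server on /var/tmp/bperf.sock and waits for that socket before configuring anything. A rough equivalent of the launch-and-wait step, with a simple polling loop standing in for the harness's waitforlisten helper (an assumption; the real helper does more bookkeeping):

# Start bdevperf on core mask 0x2 with an RPC server, 128 KiB random writes,
# queue depth 16, 2 s runtime, and -z so it idles until perform_tests arrives.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

$SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Poll until the UNIX-domain RPC socket answers; rpc_get_methods is used here
# purely as a cheap "is the server up" query.
until $SPDK/scripts/rpc.py -s $BPERF_SOCK rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "bdevperf (pid $bperfpid) is listening on $BPERF_SOCK"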
00:30:36.409 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.409 [2024-06-09 23:13:04.535228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.669 [2024-06-09 23:13:04.596852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.242 23:13:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:37.242 23:13:05 -- common/autotest_common.sh@852 -- # return 0 00:30:37.242 23:13:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:37.242 23:13:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:37.242 23:13:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:37.242 23:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.242 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:37.242 23:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.242 23:13:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:37.242 23:13:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:37.815 nvme0n1 00:30:37.815 23:13:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:37.815 23:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.815 23:13:05 -- common/autotest_common.sh@10 -- # set +x 00:30:37.815 23:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.815 23:13:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:37.815 23:13:05 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:37.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:37.815 Zero copy mechanism will not be used. 00:30:37.815 Running I/O for 2 seconds... 
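Before perform_tests is sent, the trace configures both sides: the bperf instance is told to keep per-error-code NVMe statistics and retry indefinitely, CRC32C error injection is disabled so the controller can attach cleanly, the controller is attached over TCP with --ddgst, and only then is the accel layer switched to corrupting CRC32C operations. A condensed sketch of that sequence; the rpc_cmd calls in the trace resolve to the harness's default application socket, which is left implicit below (an assumption), while the bperf calls use -s on the bperf socket, and all arguments are copied verbatim from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Keep per-status-code NVMe error statistics and never give up on retries, so
# digest errors show up as counters rather than permanently failed I/O.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Make sure no CRC32C corruption is active while the controller attaches.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled, so CRC32C is generated
# and verified on the data path; that is what produces the
# "Data digest error" lines in the run output below.
$SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection in "corrupt" mode (flags as in the trace) and kick off
# the queued workload in the idling bdevperf.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests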
00:30:37.815 [2024-06-09 23:13:05.865740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.866576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.866609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.883117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.883570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.883592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.900713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.901055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.901075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.915792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.916201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.916222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.932844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.933085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.933105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.948279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.948559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.948579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.964376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.964647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.964667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:37.815 [2024-06-09 23:13:05.980096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:37.815 [2024-06-09 23:13:05.980679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.815 [2024-06-09 23:13:05.980701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:05.996505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:05.996884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:05.996905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.012338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.012892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.012912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.029582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.029998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.030018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.045945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.046391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.046416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.063484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.063918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.080680] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.081079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.081099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.096576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.096923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.096947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.114213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.114481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.114501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.131477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.132001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.077 [2024-06-09 23:13:06.132021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.077 [2024-06-09 23:13:06.148213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.077 [2024-06-09 23:13:06.148602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.148622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.163783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.164225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.164246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.178672] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.178953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.178973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.193997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.194343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.194363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.210036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.210555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.210576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.228714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.229117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.229137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.078 [2024-06-09 23:13:06.246547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.078 [2024-06-09 23:13:06.246878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.078 [2024-06-09 23:13:06.246898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.262232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.262730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.262751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.279796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.280256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.280277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.297245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.297901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.297921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.314654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.315137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 
[2024-06-09 23:13:06.315157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.330311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.330700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.330721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.346414] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.346812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.346832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.363991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.364343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.364363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.380421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.381011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.381031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.399038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.399488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.399509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.416965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.417429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.417449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.433466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.433705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.433723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.449833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.450242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.450263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.463874] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.464159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.340 [2024-06-09 23:13:06.464179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.340 [2024-06-09 23:13:06.478322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.340 [2024-06-09 23:13:06.478827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.341 [2024-06-09 23:13:06.478848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.341 [2024-06-09 23:13:06.493483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.341 [2024-06-09 23:13:06.493756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.341 [2024-06-09 23:13:06.493777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.341 [2024-06-09 23:13:06.508568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.341 [2024-06-09 23:13:06.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.341 [2024-06-09 23:13:06.509124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.526312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.526822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.526845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.542044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.542514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.542534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.559679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.560146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.560167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.577796] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.578212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.578232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.593759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.594059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.594080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.609472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.609832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.609851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.624579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.624990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.640624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.640952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.640972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.656857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.657378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.657398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.674060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.674509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.674530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.691028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.691396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.691421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.708216] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.708631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.708651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.724864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.725151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.725171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.740931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.741362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.741382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.759093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 [2024-06-09 23:13:06.759579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.759599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.603 [2024-06-09 23:13:06.776872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.603 
[2024-06-09 23:13:06.777227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.603 [2024-06-09 23:13:06.777247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.794381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.794735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.811448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.811977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.811997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.828118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.828361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.828382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.844829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.845349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.845369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.862532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.862838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.862858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.878835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.879109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.879128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.895891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) 
with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.896317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.896338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.912190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.912494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.912513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.865 [2024-06-09 23:13:06.926855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.865 [2024-06-09 23:13:06.927195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.865 [2024-06-09 23:13:06.927215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:06.942170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:06.942602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:06.942622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:06.957342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:06.957766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:06.957789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:06.971695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:06.971970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:06.971989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:06.986454] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:06.986751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:06.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:07.002343] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:07.002744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:07.002764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:07.017387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:07.017978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:07.017998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:38.866 [2024-06-09 23:13:07.033549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:38.866 [2024-06-09 23:13:07.033942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:38.866 [2024-06-09 23:13:07.033962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.049658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.049938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.049959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.065687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.065982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.066001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.082111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.082456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.082476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.097860] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.098265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.098284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.113597] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.114164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.114184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.130514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.130760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.130779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.147608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.147933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.147952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.165093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.165525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.165545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.182080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.182419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.182439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.197830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.198248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.198268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.214919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.215446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.215465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
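The repeating triplets in this stretch of the log (a tcp.c data_crc32_calc_done *ERROR* line, the WRITE command print from nvme_qpair.c, and a completion carrying TRANSIENT TRANSPORT ERROR (00/22)) are the behaviour this digest test is exercising: a WRITE whose CRC-32C data digest fails verification is completed with a per-command transient transport error while the queue pair itself stays up, which is why the same tqpair=(0xf3d910) keeps appearing across the failures. A minimal sketch for tallying those completions offline is shown below; build.log is a hypothetical name for a saved copy of this console output, not a file produced by the test scripts.

  # Count digest-induced transient transport error completions offline.
  # 'build.log' is a hypothetical saved copy of this console output.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log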
00:30:39.128 [2024-06-09 23:13:07.231343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.231748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.231768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.248783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.249202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.249222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.265232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.265648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.265668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.281834] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.282282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.282301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.128 [2024-06-09 23:13:07.297907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.128 [2024-06-09 23:13:07.298195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.128 [2024-06-09 23:13:07.298215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.390 [2024-06-09 23:13:07.313638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.390 [2024-06-09 23:13:07.313918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.390 [2024-06-09 23:13:07.313938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.390 [2024-06-09 23:13:07.327574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.327911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.327931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.343733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.344191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.344210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.361078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.361436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.361456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.377257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.377613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.377636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.393755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.394044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.394064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.409048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.409537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.409557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.425413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.425699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.425717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.440761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.441010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.441030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.456331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.456663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.456683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.471945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.472348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.472368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.488113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.488527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.488547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.506643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.506965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.506985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.522938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.523409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.523430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.539726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.540028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.540048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.391 [2024-06-09 23:13:07.556768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.391 [2024-06-09 23:13:07.557191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.391 [2024-06-09 23:13:07.557212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.573128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.573418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.573439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.590614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.590932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.590953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.607246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.607839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.607859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.624987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.625374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.625393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.643268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.643801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.643820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.659609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.660049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.660068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.675993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.676418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 
[2024-06-09 23:13:07.676438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.692643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.693026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.693047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.710431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.710710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.710730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.726150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.726665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.726685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.743327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.743733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.743753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.760639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.761311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.761332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.778770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.779242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.653 [2024-06-09 23:13:07.779263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.653 [2024-06-09 23:13:07.795434] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90 00:30:39.653 [2024-06-09 23:13:07.795823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:30:39.653 [2024-06-09 23:13:07.795843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:30:39.653 [2024-06-09 23:13:07.813337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90
00:30:39.654 [2024-06-09 23:13:07.813708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:39.654 [2024-06-09 23:13:07.813730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:30:39.654 [2024-06-09 23:13:07.829133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf3d910) with pdu=0x2000190fef90
00:30:39.654 [2024-06-09 23:13:07.829481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:39.654 [2024-06-09 23:13:07.829501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:39.915
00:30:39.915 Latency(us)
00:30:39.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:39.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:39.915 nvme0n1 : 2.01 1859.49 232.44 0.00 0.00 8583.19 6089.39 28398.93
00:30:39.915 ===================================================================================================================
00:30:39.915 Total : 1859.49 232.44 0.00 0.00 8583.19 6089.39 28398.93
00:30:39.915 0
00:30:39.915 23:13:07 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:39.915 23:13:07 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:39.915 23:13:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:39.915 23:13:07 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:39.915 | .driver_specific
00:30:39.915 | .nvme_error
00:30:39.915 | .status_code
00:30:39.915 | .command_transient_transport_error'
00:30:39.915 23:13:08 -- host/digest.sh@71 -- # (( 120 > 0 ))
00:30:39.915 23:13:08 -- host/digest.sh@73 -- # killprocess 97708
00:30:39.915 23:13:08 -- common/autotest_common.sh@926 -- # '[' -z 97708 ']'
00:30:39.915 23:13:08 -- common/autotest_common.sh@930 -- # kill -0 97708
00:30:39.915 23:13:08 -- common/autotest_common.sh@931 -- # uname
00:30:39.915 23:13:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:39.915 23:13:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97708
00:30:39.915 23:13:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:39.915 23:13:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:39.915 23:13:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97708'
00:30:39.915 killing process with pid 97708
00:30:39.916 23:13:08 -- common/autotest_common.sh@945 -- # kill 97708
00:30:39.916 Received shutdown signal, test time was about 2.000000 seconds
00:30:39.916
00:30:39.916 Latency(us)
00:30:39.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:39.916 ===================================================================================================================
00:30:39.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
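The get_transient_errcount step above reads the command_transient_transport_error counter (120 at this point) from bdevperf's iostat over the bperf RPC socket, confirms it is non-zero, and then kills the bdevperf process (pid 97708). A stand-alone sketch of that query is shown below; it only works while bdevperf is still serving RPCs on /var/tmp/bperf.sock, which is no longer the case once the kill above has completed.

  # Re-run the counter query from the trace above (requires bdevperf to
  # still be listening on /var/tmp/bperf.sock).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'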
00:30:39.916 23:13:08 -- common/autotest_common.sh@950 -- # wait 97708 00:30:40.177 23:13:08 -- host/digest.sh@115 -- # killprocess 95348 00:30:40.177 23:13:08 -- common/autotest_common.sh@926 -- # '[' -z 95348 ']' 00:30:40.177 23:13:08 -- common/autotest_common.sh@930 -- # kill -0 95348 00:30:40.177 23:13:08 -- common/autotest_common.sh@931 -- # uname 00:30:40.177 23:13:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:40.177 23:13:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95348 00:30:40.177 23:13:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:40.177 23:13:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:40.177 23:13:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95348' 00:30:40.177 killing process with pid 95348 00:30:40.177 23:13:08 -- common/autotest_common.sh@945 -- # kill 95348 00:30:40.177 23:13:08 -- common/autotest_common.sh@950 -- # wait 95348 00:30:40.439 00:30:40.439 real 0m16.046s 00:30:40.439 user 0m31.608s 00:30:40.439 sys 0m2.871s 00:30:40.439 23:13:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.439 23:13:08 -- common/autotest_common.sh@10 -- # set +x 00:30:40.439 ************************************ 00:30:40.439 END TEST nvmf_digest_error 00:30:40.439 ************************************ 00:30:40.439 23:13:08 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:40.439 23:13:08 -- host/digest.sh@139 -- # nvmftestfini 00:30:40.439 23:13:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:40.439 23:13:08 -- nvmf/common.sh@116 -- # sync 00:30:40.439 23:13:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:40.439 23:13:08 -- nvmf/common.sh@119 -- # set +e 00:30:40.439 23:13:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:40.439 23:13:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:40.439 rmmod nvme_tcp 00:30:40.439 rmmod nvme_fabrics 00:30:40.439 rmmod nvme_keyring 00:30:40.439 23:13:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:40.439 23:13:08 -- nvmf/common.sh@123 -- # set -e 00:30:40.439 23:13:08 -- nvmf/common.sh@124 -- # return 0 00:30:40.439 23:13:08 -- nvmf/common.sh@477 -- # '[' -n 95348 ']' 00:30:40.439 23:13:08 -- nvmf/common.sh@478 -- # killprocess 95348 00:30:40.439 23:13:08 -- common/autotest_common.sh@926 -- # '[' -z 95348 ']' 00:30:40.439 23:13:08 -- common/autotest_common.sh@930 -- # kill -0 95348 00:30:40.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (95348) - No such process 00:30:40.439 23:13:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 95348 is not found' 00:30:40.439 Process with pid 95348 is not found 00:30:40.439 23:13:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:40.439 23:13:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:40.439 23:13:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:40.439 23:13:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.439 23:13:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:40.439 23:13:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.439 23:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:40.439 23:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.990 23:13:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:42.990 00:30:42.990 real 0m41.421s 00:30:42.990 user 1m5.236s 00:30:42.990 sys 0m11.096s 00:30:42.990 23:13:10 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.990 23:13:10 -- common/autotest_common.sh@10 -- # set +x 00:30:42.990 ************************************ 00:30:42.990 END TEST nvmf_digest 00:30:42.990 ************************************ 00:30:42.990 23:13:10 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:30:42.990 23:13:10 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:30:42.990 23:13:10 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:30:42.990 23:13:10 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:42.990 23:13:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:42.990 23:13:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.990 23:13:10 -- common/autotest_common.sh@10 -- # set +x 00:30:42.990 ************************************ 00:30:42.990 START TEST nvmf_bdevperf 00:30:42.990 ************************************ 00:30:42.990 23:13:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:42.990 * Looking for test storage... 00:30:42.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.990 23:13:10 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.990 23:13:10 -- nvmf/common.sh@7 -- # uname -s 00:30:42.990 23:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.990 23:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.990 23:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.990 23:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.990 23:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.990 23:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.990 23:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.990 23:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.990 23:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.990 23:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.990 23:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.990 23:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.990 23:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.990 23:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.990 23:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.990 23:13:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.990 23:13:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.990 23:13:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.990 23:13:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.990 23:13:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.991 23:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.991 23:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.991 23:13:10 -- paths/export.sh@5 -- # export PATH 00:30:42.991 23:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.991 23:13:10 -- nvmf/common.sh@46 -- # : 0 00:30:42.991 23:13:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:42.991 23:13:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:42.991 23:13:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:42.991 23:13:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.991 23:13:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.991 23:13:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:42.991 23:13:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:42.991 23:13:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:42.991 23:13:10 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:42.991 23:13:10 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:42.991 23:13:10 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:42.991 23:13:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:42.991 23:13:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.991 23:13:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:42.991 23:13:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:42.991 23:13:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:42.991 23:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:30:42.991 23:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.991 23:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.991 23:13:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:42.991 23:13:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:42.991 23:13:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:42.991 23:13:10 -- common/autotest_common.sh@10 -- # set +x 00:30:49.579 23:13:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:49.579 23:13:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:49.579 23:13:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:49.579 23:13:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:49.579 23:13:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:49.579 23:13:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:49.579 23:13:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:49.579 23:13:17 -- nvmf/common.sh@294 -- # net_devs=() 00:30:49.579 23:13:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:49.579 23:13:17 -- nvmf/common.sh@295 -- # e810=() 00:30:49.579 23:13:17 -- nvmf/common.sh@295 -- # local -ga e810 00:30:49.579 23:13:17 -- nvmf/common.sh@296 -- # x722=() 00:30:49.579 23:13:17 -- nvmf/common.sh@296 -- # local -ga x722 00:30:49.579 23:13:17 -- nvmf/common.sh@297 -- # mlx=() 00:30:49.579 23:13:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:49.579 23:13:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:49.579 23:13:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:49.579 23:13:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:49.579 23:13:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:49.579 23:13:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:49.579 23:13:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:49.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:49.579 23:13:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:49.579 23:13:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:49.579 23:13:17 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:49.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:49.580 23:13:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:49.580 23:13:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:49.580 23:13:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.580 23:13:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:49.580 23:13:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.580 23:13:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:49.580 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:49.580 23:13:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.580 23:13:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:49.580 23:13:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:49.580 23:13:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:49.580 23:13:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:49.580 23:13:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:49.580 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:49.580 23:13:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:49.580 23:13:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:49.580 23:13:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:49.580 23:13:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:49.580 23:13:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:49.580 23:13:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:49.580 23:13:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:49.580 23:13:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:49.580 23:13:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:49.580 23:13:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:49.580 23:13:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:49.580 23:13:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:49.580 23:13:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:49.580 23:13:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:49.580 23:13:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:49.580 23:13:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:49.580 23:13:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:49.580 23:13:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:49.580 23:13:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:49.580 23:13:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:49.580 23:13:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:30:49.580 23:13:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.580 23:13:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.580 23:13:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:49.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:30:49.580 00:30:49.580 --- 10.0.0.2 ping statistics --- 00:30:49.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.580 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:30:49.580 23:13:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:49.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.497 ms 00:30:49.580 00:30:49.580 --- 10.0.0.1 ping statistics --- 00:30:49.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.580 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:30:49.580 23:13:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.580 23:13:17 -- nvmf/common.sh@410 -- # return 0 00:30:49.580 23:13:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:49.580 23:13:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.580 23:13:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:49.580 23:13:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.580 23:13:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:49.580 23:13:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:49.580 23:13:17 -- host/bdevperf.sh@25 -- # tgt_init 00:30:49.580 23:13:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:49.580 23:13:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:49.580 23:13:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:49.580 23:13:17 -- common/autotest_common.sh@10 -- # set +x 00:30:49.580 23:13:17 -- nvmf/common.sh@469 -- # nvmfpid=102723 00:30:49.580 23:13:17 -- nvmf/common.sh@470 -- # waitforlisten 102723 00:30:49.580 23:13:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:49.580 23:13:17 -- common/autotest_common.sh@819 -- # '[' -z 102723 ']' 00:30:49.580 23:13:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.580 23:13:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:49.580 23:13:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.580 23:13:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:49.580 23:13:17 -- common/autotest_common.sh@10 -- # set +x 00:30:49.841 [2024-06-09 23:13:17.800514] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
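Before nvmfappstart launches the target, nvmf_tcp_init has already arranged the two ice-driven ports (0000:4b:00.0 / 0000:4b:00.1) into a back-to-back NVMe/TCP topology: cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), TCP port 4420 is opened in iptables, and both directions are ping-verified. A condensed sketch of the same commands, assuming the two ports already carry the cvl_0_0/cvl_0_1 names seen above:

#!/usr/bin/env bash
# Two-namespace NVMe/TCP test topology, as set up by nvmf_tcp_init in this log.
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # the target runs inside this namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> default ns

The nvmf_tgt started right after this is wrapped in the same ip netns exec cvl_0_0_ns_spdk prefix (with -i 0 -e 0xFFFF -m 0xE), so it listens on 10.0.0.2 from inside the namespace.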
00:30:49.841 [2024-06-09 23:13:17.800597] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.841 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.841 [2024-06-09 23:13:17.871698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:49.841 [2024-06-09 23:13:17.945167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:49.841 [2024-06-09 23:13:17.945286] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.841 [2024-06-09 23:13:17.945295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.841 [2024-06-09 23:13:17.945302] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.841 [2024-06-09 23:13:17.945417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.841 [2024-06-09 23:13:17.945566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.841 [2024-06-09 23:13:17.945567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.411 23:13:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:50.411 23:13:18 -- common/autotest_common.sh@852 -- # return 0 00:30:50.411 23:13:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:50.411 23:13:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:50.411 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 23:13:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.671 23:13:18 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.671 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.671 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 [2024-06-09 23:13:18.597076] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.671 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.671 23:13:18 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.671 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.671 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 Malloc0 00:30:50.671 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.671 23:13:18 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.671 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.671 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.671 23:13:18 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.671 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.671 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.671 23:13:18 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.671 23:13:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.671 23:13:18 -- common/autotest_common.sh@10 -- # set +x 00:30:50.671 [2024-06-09 23:13:18.665702] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.671 23:13:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.671 23:13:18 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:50.671 23:13:18 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:50.671 23:13:18 -- nvmf/common.sh@520 -- # config=() 00:30:50.671 23:13:18 -- nvmf/common.sh@520 -- # local subsystem config 00:30:50.671 23:13:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:50.671 23:13:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:50.671 { 00:30:50.671 "params": { 00:30:50.671 "name": "Nvme$subsystem", 00:30:50.671 "trtype": "$TEST_TRANSPORT", 00:30:50.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.671 "adrfam": "ipv4", 00:30:50.671 "trsvcid": "$NVMF_PORT", 00:30:50.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.671 "hdgst": ${hdgst:-false}, 00:30:50.671 "ddgst": ${ddgst:-false} 00:30:50.671 }, 00:30:50.671 "method": "bdev_nvme_attach_controller" 00:30:50.671 } 00:30:50.671 EOF 00:30:50.671 )") 00:30:50.671 23:13:18 -- nvmf/common.sh@542 -- # cat 00:30:50.671 23:13:18 -- nvmf/common.sh@544 -- # jq . 00:30:50.671 23:13:18 -- nvmf/common.sh@545 -- # IFS=, 00:30:50.671 23:13:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:50.671 "params": { 00:30:50.671 "name": "Nvme1", 00:30:50.671 "trtype": "tcp", 00:30:50.671 "traddr": "10.0.0.2", 00:30:50.671 "adrfam": "ipv4", 00:30:50.671 "trsvcid": "4420", 00:30:50.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.671 "hdgst": false, 00:30:50.671 "ddgst": false 00:30:50.671 }, 00:30:50.671 "method": "bdev_nvme_attach_controller" 00:30:50.671 }' 00:30:50.671 [2024-06-09 23:13:18.714541] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:50.671 [2024-06-09 23:13:18.714595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102903 ] 00:30:50.671 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.671 [2024-06-09 23:13:18.773636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.671 [2024-06-09 23:13:18.836035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.932 Running I/O for 1 seconds... 
00:30:51.875 00:30:51.875 Latency(us) 00:30:51.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:51.875 Verification LBA range: start 0x0 length 0x4000 00:30:51.875 Nvme1n1 : 1.01 13808.23 53.94 0.00 0.00 9228.47 1256.11 20753.07 00:30:51.875 =================================================================================================================== 00:30:51.875 Total : 13808.23 53.94 0.00 0.00 9228.47 1256.11 20753.07 00:30:52.136 23:13:20 -- host/bdevperf.sh@30 -- # bdevperfpid=103116 00:30:52.136 23:13:20 -- host/bdevperf.sh@32 -- # sleep 3 00:30:52.136 23:13:20 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:52.136 23:13:20 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:52.136 23:13:20 -- nvmf/common.sh@520 -- # config=() 00:30:52.136 23:13:20 -- nvmf/common.sh@520 -- # local subsystem config 00:30:52.136 23:13:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:52.136 23:13:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:52.136 { 00:30:52.136 "params": { 00:30:52.136 "name": "Nvme$subsystem", 00:30:52.136 "trtype": "$TEST_TRANSPORT", 00:30:52.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.136 "adrfam": "ipv4", 00:30:52.136 "trsvcid": "$NVMF_PORT", 00:30:52.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.136 "hdgst": ${hdgst:-false}, 00:30:52.136 "ddgst": ${ddgst:-false} 00:30:52.136 }, 00:30:52.136 "method": "bdev_nvme_attach_controller" 00:30:52.136 } 00:30:52.136 EOF 00:30:52.136 )") 00:30:52.136 23:13:20 -- nvmf/common.sh@542 -- # cat 00:30:52.136 23:13:20 -- nvmf/common.sh@544 -- # jq . 00:30:52.136 23:13:20 -- nvmf/common.sh@545 -- # IFS=, 00:30:52.136 23:13:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:52.136 "params": { 00:30:52.136 "name": "Nvme1", 00:30:52.136 "trtype": "tcp", 00:30:52.136 "traddr": "10.0.0.2", 00:30:52.136 "adrfam": "ipv4", 00:30:52.137 "trsvcid": "4420", 00:30:52.137 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.137 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.137 "hdgst": false, 00:30:52.137 "ddgst": false 00:30:52.137 }, 00:30:52.137 "method": "bdev_nvme_attach_controller" 00:30:52.137 }' 00:30:52.137 [2024-06-09 23:13:20.218436] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:52.137 [2024-06-09 23:13:20.218491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103116 ] 00:30:52.137 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.137 [2024-06-09 23:13:20.277612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.408 [2024-06-09 23:13:20.338053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.409 Running I/O for 15 seconds... 
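The bdevperf stage traced above is driven by a handful of RPCs plus a generated JSON config: the target gets a TCP transport (-o -u 8192), a 64 MiB / 512-byte-block Malloc0 bdev exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and a 10.0.0.2:4420 listener; bdevperf then attaches through a single bdev_nvme_attach_controller entry that gen_nvmf_target_json pipes in over /dev/fd/62 and /dev/fd/63. A condensed sketch of equivalent standalone steps; the RPC socket path and the outer "subsystems" wrapper are assumptions (only the RPC arguments and the inner attach-controller object are copied from the trace), and a temp file stands in for the /dev/fd pipe:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py -s /var/tmp/spdk.sock"    # assumed default target RPC socket

# Target side: transport, backing bdev, subsystem, namespace, listener (host/bdevperf.sh@17-21).
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: the attach-controller entry gen_nvmf_target_json resolves to in this run,
# wrapped in an assumed standard SPDK app-config layout and handed to bdevperf.
cat > /tmp/bdevperf.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
"$spdk/build/examples/bdevperf" --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f

The second run above (15 seconds, -f) is immediately followed by kill -9 of the target pid 102723, so the long run of ABORTED - SQ DELETION completions below is the outstanding verify I/O being failed back while the target is down.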
00:30:55.015 23:13:23 -- host/bdevperf.sh@33 -- # kill -9 102723 00:30:55.015 23:13:23 -- host/bdevperf.sh@35 -- # sleep 3 00:30:55.015 [2024-06-09 23:13:23.178224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:31512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:31560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:31632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:31648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:31656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:31680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:31040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:55.015 [2024-06-09 23:13:23.178859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:31096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:31720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.178965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:31728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.178982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.178991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 
23:13:23.179026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:31776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:31800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:31216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:31232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:31264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:31344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:31824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:31848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:31856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:31368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31376 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:31392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:31400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.015 [2024-06-09 23:13:23.179728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.015 [2024-06-09 23:13:23.179739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:31928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.015 [2024-06-09 23:13:23.179747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:31936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:31944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:55.016 [2024-06-09 23:13:23.179778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:31960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:31968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.179940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.179988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.179998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:32064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:32088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180103] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:31496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:32168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.016 [2024-06-09 23:13:23.180422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:55.016 [2024-06-09 23:13:23.180449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:31576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.016 [2024-06-09 23:13:23.180541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaaaab0 is same with the state(5) to be set 00:30:55.016 [2024-06-09 23:13:23.180558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.016 [2024-06-09 23:13:23.180565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.016 [2024-06-09 23:13:23.180571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31704 len:8 PRP1 0x0 PRP2 0x0 00:30:55.016 [2024-06-09 23:13:23.180579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180619] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaaaab0 was disconnected and freed. reset controller. 
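The block above is a burst of I/O commands (READ and WRITE on sqid:1) that were aborted with SQ DELETION (00/08) status while qpair 0xaaaab0 was being torn down; the qpair is then disconnected and freed and the controller reset begins. A quick way to tally these aborts from a saved copy of this console log, a minimal sketch that assumes the log was saved as build.log (an illustrative path, not produced by the test) and relies only on the exact strings printed by nvme_io_qpair_print_command and spdk_nvme_print_completion above, is:
# total completions aborted with SQ DELETION status (grep -o so that several
# records sharing one physical log line are each counted)
grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# aborted I/O commands on the I/O queue, broken down by opcode
grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c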
00:30:55.016 [2024-06-09 23:13:23.180662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.016 [2024-06-09 23:13:23.180672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.016 [2024-06-09 23:13:23.180689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.016 [2024-06-09 23:13:23.180704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.016 [2024-06-09 23:13:23.180719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.016 [2024-06-09 23:13:23.180726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.016 [2024-06-09 23:13:23.183126] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.016 [2024-06-09 23:13:23.183146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.016 [2024-06-09 23:13:23.183923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.016 [2024-06-09 23:13:23.184648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.016 [2024-06-09 23:13:23.184685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.016 [2024-06-09 23:13:23.184695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.016 [2024-06-09 23:13:23.184844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.016 [2024-06-09 23:13:23.184972] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.016 [2024-06-09 23:13:23.184981] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.016 [2024-06-09 23:13:23.184990] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.016 [2024-06-09 23:13:23.187496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
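Each failed reset attempt in the remainder of this run follows the same pattern: posix_sock_create reports connect() failed with errno = 111, the TCP qpair flush then fails with a bad file descriptor, and spdk_nvme_ctrlr_reconnect_poll_async marks the controller reinitialization as failed. On Linux, errno 111 is ECONNREFUSED, i.e. nothing accepted the connection to 10.0.0.2 port 4420 at that moment. This can be double-checked from a shell with an illustrative one-liner (not part of the test itself):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused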
00:30:55.278 [2024-06-09 23:13:23.195657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.278 [2024-06-09 23:13:23.196353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.196969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.197007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.278 [2024-06-09 23:13:23.197017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.278 [2024-06-09 23:13:23.197164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.278 [2024-06-09 23:13:23.197293] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.278 [2024-06-09 23:13:23.197302] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.278 [2024-06-09 23:13:23.197310] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.278 [2024-06-09 23:13:23.199692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.278 [2024-06-09 23:13:23.208367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.278 [2024-06-09 23:13:23.209065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.209630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.209668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.278 [2024-06-09 23:13:23.209678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.278 [2024-06-09 23:13:23.209822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.278 [2024-06-09 23:13:23.209950] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.278 [2024-06-09 23:13:23.209960] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.278 [2024-06-09 23:13:23.209968] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.278 [2024-06-09 23:13:23.212378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.278 [2024-06-09 23:13:23.220842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.278 [2024-06-09 23:13:23.221533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.222069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.222083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.278 [2024-06-09 23:13:23.222093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.278 [2024-06-09 23:13:23.222273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.278 [2024-06-09 23:13:23.222420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.278 [2024-06-09 23:13:23.222430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.278 [2024-06-09 23:13:23.222438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.278 [2024-06-09 23:13:23.224830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.278 [2024-06-09 23:13:23.233207] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.278 [2024-06-09 23:13:23.233982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.234511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.234531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.278 [2024-06-09 23:13:23.234541] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.278 [2024-06-09 23:13:23.234721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.278 [2024-06-09 23:13:23.234831] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.278 [2024-06-09 23:13:23.234841] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.278 [2024-06-09 23:13:23.234848] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.278 [2024-06-09 23:13:23.237069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.278 [2024-06-09 23:13:23.245633] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.278 [2024-06-09 23:13:23.246286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.246800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.278 [2024-06-09 23:13:23.246812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.278 [2024-06-09 23:13:23.246820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.278 [2024-06-09 23:13:23.246926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.247070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.247078] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.247085] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.249179] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.279 [2024-06-09 23:13:23.258118] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.258620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.259122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.259133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.259140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.259301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.259467] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.259476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.259482] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.261774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.279 [2024-06-09 23:13:23.270640] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.271090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.271604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.271615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.271626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.271841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.272002] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.272011] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.272018] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.274407] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.279 [2024-06-09 23:13:23.283061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.283712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.284090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.284101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.284108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.284269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.284436] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.284445] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.284452] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.286547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.279 [2024-06-09 23:13:23.295836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.296677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.297194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.297208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.297217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.297398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.297569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.297578] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.297586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.299811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.279 [2024-06-09 23:13:23.308413] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.309175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.309622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.309638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.309647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.309832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.309961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.309970] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.309977] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.312077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.279 [2024-06-09 23:13:23.321289] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.321872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.322380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.322391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.322398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.322537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.322717] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.322726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.322734] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.324810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.279 [2024-06-09 23:13:23.333819] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.334518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.335063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.335077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.335086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.335212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.335358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.335366] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.335374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.337754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.279 [2024-06-09 23:13:23.346389] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.347141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.347632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.347672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.347684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.347829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.347981] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.347990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.347997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.350190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.279 [2024-06-09 23:13:23.359152] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.279 [2024-06-09 23:13:23.359873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.360441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.279 [2024-06-09 23:13:23.360457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.279 [2024-06-09 23:13:23.360466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.279 [2024-06-09 23:13:23.360610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.279 [2024-06-09 23:13:23.360775] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.279 [2024-06-09 23:13:23.360784] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.279 [2024-06-09 23:13:23.360791] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.279 [2024-06-09 23:13:23.363092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.279 [2024-06-09 23:13:23.371632] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.372382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.372944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.372959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.372969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.373148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.373277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.373286] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.373293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.375436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.280 [2024-06-09 23:13:23.384133] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.384925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.385621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.385660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.385671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.385797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.385980] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.385989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.386002] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.388180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.280 [2024-06-09 23:13:23.396477] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.397152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.397737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.397774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.397786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.397948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.398113] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.398121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.398129] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.400233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.280 [2024-06-09 23:13:23.408712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.409469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.410027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.410041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.410050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.410194] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.410341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.410350] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.410358] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.412610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.280 [2024-06-09 23:13:23.421281] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.421970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.422486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.422501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.422511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.422655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.422838] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.422847] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.422855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.425170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.280 [2024-06-09 23:13:23.433727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.434420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.434954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.434968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.434977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.435139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.435267] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.435276] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.435283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.437642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.280 [2024-06-09 23:13:23.446191] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.280 [2024-06-09 23:13:23.446866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.447428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.280 [2024-06-09 23:13:23.447443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.280 [2024-06-09 23:13:23.447452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.280 [2024-06-09 23:13:23.447633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.280 [2024-06-09 23:13:23.447797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.280 [2024-06-09 23:13:23.447806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.280 [2024-06-09 23:13:23.447814] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.280 [2024-06-09 23:13:23.449988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.542 [2024-06-09 23:13:23.458622] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.542 [2024-06-09 23:13:23.459274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.459876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.459914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.542 [2024-06-09 23:13:23.459925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.542 [2024-06-09 23:13:23.460069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.542 [2024-06-09 23:13:23.460233] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.542 [2024-06-09 23:13:23.460242] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.542 [2024-06-09 23:13:23.460250] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.542 [2024-06-09 23:13:23.462596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.542 [2024-06-09 23:13:23.471348] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.542 [2024-06-09 23:13:23.471958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.472648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.472686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.542 [2024-06-09 23:13:23.472697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.542 [2024-06-09 23:13:23.472823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.542 [2024-06-09 23:13:23.472987] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.542 [2024-06-09 23:13:23.472997] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.542 [2024-06-09 23:13:23.473004] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.542 [2024-06-09 23:13:23.475342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.542 [2024-06-09 23:13:23.483752] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.542 [2024-06-09 23:13:23.484413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.484906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.484917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.542 [2024-06-09 23:13:23.484925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.542 [2024-06-09 23:13:23.485104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.542 [2024-06-09 23:13:23.485248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.542 [2024-06-09 23:13:23.485256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.542 [2024-06-09 23:13:23.485263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.542 [2024-06-09 23:13:23.487470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.542 [2024-06-09 23:13:23.496382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.542 [2024-06-09 23:13:23.497085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.497633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.497671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.542 [2024-06-09 23:13:23.497682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.542 [2024-06-09 23:13:23.497880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.542 [2024-06-09 23:13:23.498008] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.542 [2024-06-09 23:13:23.498017] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.542 [2024-06-09 23:13:23.498024] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.542 [2024-06-09 23:13:23.500361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.542 [2024-06-09 23:13:23.508848] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.542 [2024-06-09 23:13:23.509633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.510141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.542 [2024-06-09 23:13:23.510155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.542 [2024-06-09 23:13:23.510164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.542 [2024-06-09 23:13:23.510363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.542 [2024-06-09 23:13:23.510535] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.510545] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.510553] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.512708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.543 [2024-06-09 23:13:23.521351] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.522083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.522633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.522670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.522681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.522898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.523026] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.523035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.523042] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.525281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.543 [2024-06-09 23:13:23.533828] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.534501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.535060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.535075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.535084] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.535246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.535392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.535410] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.535418] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.537703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.543 [2024-06-09 23:13:23.546345] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.547112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.547724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.547766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.547777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.547994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.548159] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.548170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.548177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.550499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.543 [2024-06-09 23:13:23.558810] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.559612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.560013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.560029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.560038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.560201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.560366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.560375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.560383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.562674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.543 [2024-06-09 23:13:23.571103] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.571825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.572339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.572354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.572363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.572552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.572736] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.572746] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.572753] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.574984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.543 [2024-06-09 23:13:23.583579] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.584409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.584962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.584976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.584993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.585119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.585283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.585292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.585300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.587640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.543 [2024-06-09 23:13:23.596096] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.596830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.597386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.597400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.597419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.597581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.597727] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.597736] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.597745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.599699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.543 [2024-06-09 23:13:23.608750] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.609505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.610063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.610078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.610088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.610267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.610359] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.610368] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.610376] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.612775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.543 [2024-06-09 23:13:23.621088] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.621696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.622206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.622220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.543 [2024-06-09 23:13:23.622230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.543 [2024-06-09 23:13:23.622377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.543 [2024-06-09 23:13:23.622475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.543 [2024-06-09 23:13:23.622484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.543 [2024-06-09 23:13:23.622491] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.543 [2024-06-09 23:13:23.624710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.543 [2024-06-09 23:13:23.633460] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.543 [2024-06-09 23:13:23.634153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.543 [2024-06-09 23:13:23.634749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.634767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.634776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.634924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.635106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.635115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.635122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.637332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.544 [2024-06-09 23:13:23.645922] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.646683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.648195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.648221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.648232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.648377] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.648530] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.648540] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.648548] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.650630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.544 [2024-06-09 23:13:23.658430] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.659094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.659702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.659740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.659751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.659932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.660083] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.660092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.660099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.662384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.544 [2024-06-09 23:13:23.670877] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.671594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.672149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.672163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.672173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.672334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.672487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.672497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.672505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.674839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.544 [2024-06-09 23:13:23.683272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.683924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.684408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.684420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.684427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.684533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.684695] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.684703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.684710] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.686984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.544 [2024-06-09 23:13:23.695755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.696373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.696947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.696985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.696997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.697183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.697327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.697341] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.697349] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.699545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.544 [2024-06-09 23:13:23.707980] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.544 [2024-06-09 23:13:23.708691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.709226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.544 [2024-06-09 23:13:23.709240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.544 [2024-06-09 23:13:23.709250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.544 [2024-06-09 23:13:23.709394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.544 [2024-06-09 23:13:23.709547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.544 [2024-06-09 23:13:23.709556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.544 [2024-06-09 23:13:23.709564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.544 [2024-06-09 23:13:23.711899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.806 [2024-06-09 23:13:23.720528] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.806 [2024-06-09 23:13:23.721272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.721833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.721849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.806 [2024-06-09 23:13:23.721859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.806 [2024-06-09 23:13:23.722021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.806 [2024-06-09 23:13:23.722149] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.806 [2024-06-09 23:13:23.722158] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.806 [2024-06-09 23:13:23.722165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.806 [2024-06-09 23:13:23.724425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.806 [2024-06-09 23:13:23.733068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.806 [2024-06-09 23:13:23.733833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.734931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.734958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.806 [2024-06-09 23:13:23.734968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.806 [2024-06-09 23:13:23.735130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.806 [2024-06-09 23:13:23.735277] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.806 [2024-06-09 23:13:23.735286] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.806 [2024-06-09 23:13:23.735298] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.806 [2024-06-09 23:13:23.737572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.806 [2024-06-09 23:13:23.745727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.806 [2024-06-09 23:13:23.746365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.746990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.747028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.806 [2024-06-09 23:13:23.747039] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.806 [2024-06-09 23:13:23.747201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.806 [2024-06-09 23:13:23.747366] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.806 [2024-06-09 23:13:23.747375] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.806 [2024-06-09 23:13:23.747383] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.806 [2024-06-09 23:13:23.749596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.806 [2024-06-09 23:13:23.758391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.806 [2024-06-09 23:13:23.759060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.759791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.806 [2024-06-09 23:13:23.759829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.806 [2024-06-09 23:13:23.759839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.806 [2024-06-09 23:13:23.760020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.806 [2024-06-09 23:13:23.760148] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.806 [2024-06-09 23:13:23.760157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.806 [2024-06-09 23:13:23.760164] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.806 [2024-06-09 23:13:23.762549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.807 [2024-06-09 23:13:23.770873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.771645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.772166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.772180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.772189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.772351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.772503] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.772513] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.772521] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.774678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.807 [2024-06-09 23:13:23.783385] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.784062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.784668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.784706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.784717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.784824] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.785007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.785016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.785024] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.787143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.807 [2024-06-09 23:13:23.795869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.796369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.796779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.796817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.796828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.797026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.797191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.797200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.797208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.799477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.807 [2024-06-09 23:13:23.808357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.809082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.809465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.809480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.809489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.809669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.809816] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.809825] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.809832] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.811987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.807 [2024-06-09 23:13:23.821040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.821835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.822350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.822364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.822374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.822488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.822616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.822625] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.822632] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.825015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.807 [2024-06-09 23:13:23.833599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.834222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.834833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.834871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.834881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.835080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.835209] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.835218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.835226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.837203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.807 [2024-06-09 23:13:23.846213] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.846798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.847168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.847179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.847187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.847348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.847512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.847521] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.847529] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.849917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.807 [2024-06-09 23:13:23.858749] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.859411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.859928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.859937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.859945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.860034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.860140] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.860152] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.807 [2024-06-09 23:13:23.860159] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.807 [2024-06-09 23:13:23.862397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.807 [2024-06-09 23:13:23.871428] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.807 [2024-06-09 23:13:23.872155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.872786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.807 [2024-06-09 23:13:23.872824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.807 [2024-06-09 23:13:23.872835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.807 [2024-06-09 23:13:23.873033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.807 [2024-06-09 23:13:23.873161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.807 [2024-06-09 23:13:23.873169] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.873177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.875554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.808 [2024-06-09 23:13:23.884014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.884693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.885226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.885238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.885247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.885435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.885527] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.885536] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.885543] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.887660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.808 [2024-06-09 23:13:23.896620] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.897373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.897897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.897915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.897925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.898087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.898200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.898208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.898216] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.900572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.808 [2024-06-09 23:13:23.909046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.909810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.910359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.910371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.910381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.910567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.910733] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.910741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.910748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.913065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.808 [2024-06-09 23:13:23.921583] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.922336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.922876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.922890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.922899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.923061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.923207] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.923216] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.923223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.925519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.808 [2024-06-09 23:13:23.934214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.934909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.935447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.935462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.935475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.935638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.935784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.935793] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.935800] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.938155] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.808 [2024-06-09 23:13:23.946840] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.947587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.948120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.948134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.948144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.948324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.948457] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.948467] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.948474] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.950863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:55.808 [2024-06-09 23:13:23.959370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.960054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.960580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.960595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.960605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.960731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.960858] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.960866] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.960873] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.963212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:55.808 [2024-06-09 23:13:23.971769] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.808 [2024-06-09 23:13:23.972506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.973044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.808 [2024-06-09 23:13:23.973058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:55.808 [2024-06-09 23:13:23.973067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:55.808 [2024-06-09 23:13:23.973251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:55.808 [2024-06-09 23:13:23.973423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:55.808 [2024-06-09 23:13:23.973433] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:55.808 [2024-06-09 23:13:23.973440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.808 [2024-06-09 23:13:23.975666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.070 [2024-06-09 23:13:23.984314] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.070 [2024-06-09 23:13:23.985000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:23.985282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:23.985299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.070 [2024-06-09 23:13:23.985308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.070 [2024-06-09 23:13:23.985423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.070 [2024-06-09 23:13:23.985585] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.070 [2024-06-09 23:13:23.985593] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.070 [2024-06-09 23:13:23.985599] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.070 [2024-06-09 23:13:23.987911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.070 [2024-06-09 23:13:23.996829] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.070 [2024-06-09 23:13:23.997513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:23.998060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:23.998073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.070 [2024-06-09 23:13:23.998083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.070 [2024-06-09 23:13:23.998262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.070 [2024-06-09 23:13:23.998416] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.070 [2024-06-09 23:13:23.998425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.070 [2024-06-09 23:13:23.998433] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.070 [2024-06-09 23:13:24.000440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.070 [2024-06-09 23:13:24.009218] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.070 [2024-06-09 23:13:24.009887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:24.010307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:24.010320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.070 [2024-06-09 23:13:24.010329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.070 [2024-06-09 23:13:24.010498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.070 [2024-06-09 23:13:24.010630] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.070 [2024-06-09 23:13:24.010640] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.070 [2024-06-09 23:13:24.010647] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.070 [2024-06-09 23:13:24.012820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.070 [2024-06-09 23:13:24.021837] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.070 [2024-06-09 23:13:24.022617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.070 [2024-06-09 23:13:24.023202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.023215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.023224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.023386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.023567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.023576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.023583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.025954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.071 [2024-06-09 23:13:24.034337] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.035024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.035629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.035667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.035679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.035860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.035988] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.035999] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.036006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.038514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.071 [2024-06-09 23:13:24.046719] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.047398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.048021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.048058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.048069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.048193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.048303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.048311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.048322] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.050468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.071 [2024-06-09 23:13:24.059133] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.059877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.060430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.060445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.060455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.060617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.060763] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.060771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.060778] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.063098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.071 [2024-06-09 23:13:24.071740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.072355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.072936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.072973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.072984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.073127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.073273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.073281] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.073288] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.075519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.071 [2024-06-09 23:13:24.084375] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.084979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.085524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.085539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.085548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.085764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.085911] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.085919] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.085926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.088304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.071 [2024-06-09 23:13:24.097095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.097888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.098411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.098424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.098433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.098577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.098722] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.098731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.098738] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.101020] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.071 [2024-06-09 23:13:24.109429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.110164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.110708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.110724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.110733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.110895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.110986] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.110994] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.111001] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.113138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.071 [2024-06-09 23:13:24.121830] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.122658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.123172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.123185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.123194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.123392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.123536] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.123544] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.071 [2024-06-09 23:13:24.123551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.071 [2024-06-09 23:13:24.125540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.071 [2024-06-09 23:13:24.134237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.071 [2024-06-09 23:13:24.134892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.135392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.071 [2024-06-09 23:13:24.135411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.071 [2024-06-09 23:13:24.135421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.071 [2024-06-09 23:13:24.135565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.071 [2024-06-09 23:13:24.135746] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.071 [2024-06-09 23:13:24.135755] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.135762] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.138077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.072 [2024-06-09 23:13:24.146582] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.147200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.147713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.147749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.147760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.147885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.148067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.148075] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.148082] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.150477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.072 [2024-06-09 23:13:24.159167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.159861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.160282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.160294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.160304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.160455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.160564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.160573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.160580] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.162554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.072 [2024-06-09 23:13:24.171540] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.172138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.172753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.172789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.172800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.172980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.173108] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.173116] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.173123] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.175485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.072 [2024-06-09 23:13:24.183971] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.184773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.185356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.185369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.185379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.185510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.185620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.185628] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.185635] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.187844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.072 [2024-06-09 23:13:24.196451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.197073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.197706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.197743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.197754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.197934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.198116] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.198125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.198132] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.200419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.072 [2024-06-09 23:13:24.208807] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.209453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.210028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.210038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.210046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.210226] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.210369] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.210377] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.210384] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.212537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.072 [2024-06-09 23:13:24.221479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.222086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.222699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.222736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.222746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.222927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.223036] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.223044] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.223051] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.225308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.072 [2024-06-09 23:13:24.234102] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.234861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.235432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.235455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.235464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.072 [2024-06-09 23:13:24.235626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.072 [2024-06-09 23:13:24.235754] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.072 [2024-06-09 23:13:24.235762] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.072 [2024-06-09 23:13:24.235770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.072 [2024-06-09 23:13:24.237854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.072 [2024-06-09 23:13:24.246539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.072 [2024-06-09 23:13:24.247188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.247767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.072 [2024-06-09 23:13:24.247803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.072 [2024-06-09 23:13:24.247818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.334 [2024-06-09 23:13:24.247962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.334 [2024-06-09 23:13:24.248091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.334 [2024-06-09 23:13:24.248100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.334 [2024-06-09 23:13:24.248108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.334 [2024-06-09 23:13:24.250432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.334 [2024-06-09 23:13:24.258841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.334 [2024-06-09 23:13:24.259608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.260172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.260185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.334 [2024-06-09 23:13:24.260194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.334 [2024-06-09 23:13:24.260356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.334 [2024-06-09 23:13:24.260507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.334 [2024-06-09 23:13:24.260515] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.334 [2024-06-09 23:13:24.260523] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.334 [2024-06-09 23:13:24.262657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.334 [2024-06-09 23:13:24.271324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.334 [2024-06-09 23:13:24.272103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.272622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.272637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.334 [2024-06-09 23:13:24.272646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.334 [2024-06-09 23:13:24.272808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.334 [2024-06-09 23:13:24.272917] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.334 [2024-06-09 23:13:24.272925] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.334 [2024-06-09 23:13:24.272932] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.334 [2024-06-09 23:13:24.275249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.334 [2024-06-09 23:13:24.283861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.334 [2024-06-09 23:13:24.284653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.285219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.334 [2024-06-09 23:13:24.285231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.334 [2024-06-09 23:13:24.285240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.334 [2024-06-09 23:13:24.285416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.334 [2024-06-09 23:13:24.285545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.334 [2024-06-09 23:13:24.285553] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.334 [2024-06-09 23:13:24.285560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.334 [2024-06-09 23:13:24.287951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.334 [2024-06-09 23:13:24.296334] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.334 [2024-06-09 23:13:24.296975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.297266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.297278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.297288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.297492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.297656] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.297664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.297671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.299897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.335 [2024-06-09 23:13:24.308774] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.309510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.309910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.309923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.309932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.310039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.310184] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.310193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.310200] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.312392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.335 [2024-06-09 23:13:24.321227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.321908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.322436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.322454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.322463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.322591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.322723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.322732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.322739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.325064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.335 [2024-06-09 23:13:24.333591] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.334341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.334900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.334914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.334924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.335104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.335250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.335258] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.335265] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.337568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.335 [2024-06-09 23:13:24.346332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.347121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.347619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.347633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.347643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.347804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.347968] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.347976] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.347983] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.350432] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.335 [2024-06-09 23:13:24.358954] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.359600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.360116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.360125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.360133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.360275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.360381] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.360393] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.360400] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.362461] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.335 [2024-06-09 23:13:24.371223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.371976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.372516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.372529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.372539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.372700] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.372882] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.372890] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.372897] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.375052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.335 [2024-06-09 23:13:24.383711] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.384373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.384884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.384896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.384904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.385029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.385171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.385178] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.385185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.387555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.335 [2024-06-09 23:13:24.396126] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.396855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.397390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.397411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.397421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.397619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.397729] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.335 [2024-06-09 23:13:24.397736] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.335 [2024-06-09 23:13:24.397747] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.335 [2024-06-09 23:13:24.399957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.335 [2024-06-09 23:13:24.408426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.335 [2024-06-09 23:13:24.409078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.409686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.335 [2024-06-09 23:13:24.409722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.335 [2024-06-09 23:13:24.409734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.335 [2024-06-09 23:13:24.409934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.335 [2024-06-09 23:13:24.410080] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.410088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.410096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.412399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.336 [2024-06-09 23:13:24.420784] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.421481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.422030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.422043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.422052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.422195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.422322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.422330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.422338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.424723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.336 [2024-06-09 23:13:24.433479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.434280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.434823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.434838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.434847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.435009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.435173] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.435181] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.435188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.437497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.336 [2024-06-09 23:13:24.446015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.446803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.447345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.447357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.447367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.447519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.447665] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.447673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.447680] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.449981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.336 [2024-06-09 23:13:24.458619] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.459366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.459923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.459937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.459946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.460144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.460272] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.460279] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.460286] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.462462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.336 [2024-06-09 23:13:24.471088] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.471727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.472263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.472276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.472285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.472440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.472586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.472594] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.472601] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.474772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.336 [2024-06-09 23:13:24.483471] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.484175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.484796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.484833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.484843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.485005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.485151] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.485159] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.485167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.487306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.336 [2024-06-09 23:13:24.496068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.496815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.497351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.497363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.497373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.497577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.497687] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.497695] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.497702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.336 [2024-06-09 23:13:24.499873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.336 [2024-06-09 23:13:24.508625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.336 [2024-06-09 23:13:24.509451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.509949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.336 [2024-06-09 23:13:24.509962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.336 [2024-06-09 23:13:24.509971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.336 [2024-06-09 23:13:24.510133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.336 [2024-06-09 23:13:24.510296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.336 [2024-06-09 23:13:24.510304] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.336 [2024-06-09 23:13:24.510312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.512457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.599 [2024-06-09 23:13:24.521168] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.521878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.522425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.522439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.522448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.522645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.522755] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.522762] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.522770] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.524842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.599 [2024-06-09 23:13:24.533705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.534444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.534949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.534962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.534972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.535188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.535315] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.535323] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.535330] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.537674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.599 [2024-06-09 23:13:24.546217] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.546958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.547609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.547646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.547658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.547823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.547932] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.547940] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.547948] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.550124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.599 [2024-06-09 23:13:24.558649] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.559248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.559818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.559836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.559845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.559988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.560188] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.560196] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.560204] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.562470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.599 [2024-06-09 23:13:24.571173] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.571918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.572460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.572482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.572492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.572691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.572800] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.572808] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.572815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.575118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.599 [2024-06-09 23:13:24.583658] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.584389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.584938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.584951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.584961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.585085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.585212] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.585221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.585228] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.587567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.599 [2024-06-09 23:13:24.595985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.596794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.597328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.597341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.597354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.597524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.597634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.597642] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.597650] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.600021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.599 [2024-06-09 23:13:24.608332] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.608803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.609298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.609312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.599 [2024-06-09 23:13:24.609321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.599 [2024-06-09 23:13:24.609512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.599 [2024-06-09 23:13:24.609661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.599 [2024-06-09 23:13:24.609669] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.599 [2024-06-09 23:13:24.609677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.599 [2024-06-09 23:13:24.611973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.599 [2024-06-09 23:13:24.620917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.599 [2024-06-09 23:13:24.621674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.622209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.599 [2024-06-09 23:13:24.622221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.622231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.622392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.622547] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.622556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.622563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.624742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.600 [2024-06-09 23:13:24.633443] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.634230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.634745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.634759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.634768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.634916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.635025] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.635033] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.635040] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.637523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.600 [2024-06-09 23:13:24.645876] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.646414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.646987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.647024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.647034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.647196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.647324] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.647332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.647339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.649444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.600 [2024-06-09 23:13:24.658344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.659103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.659723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.659760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.659770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.659913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.660077] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.660086] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.660093] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.662267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.600 [2024-06-09 23:13:24.670902] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.671682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.672223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.672236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.672246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.672433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.672620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.672629] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.672636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.674934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.600 [2024-06-09 23:13:24.683355] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.684123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.684752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.684788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.684799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.684979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.685161] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.685170] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.685177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.687445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.600 [2024-06-09 23:13:24.695905] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.696657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.697198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.697210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.697220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.697381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.697553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.697562] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.697569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.699866] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.600 [2024-06-09 23:13:24.708375] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.709117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.709738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.709775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.709785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.709929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.710074] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.710087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.710095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.712436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.600 [2024-06-09 23:13:24.720881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.721501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.721865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.721877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.721887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.722103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.722285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.722293] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.722300] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.600 [2024-06-09 23:13:24.724561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.600 [2024-06-09 23:13:24.733541] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.600 [2024-06-09 23:13:24.734286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.734826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.600 [2024-06-09 23:13:24.734841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.600 [2024-06-09 23:13:24.734850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.600 [2024-06-09 23:13:24.735011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.600 [2024-06-09 23:13:24.735157] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.600 [2024-06-09 23:13:24.735165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.600 [2024-06-09 23:13:24.735172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.601 [2024-06-09 23:13:24.737326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.601 [2024-06-09 23:13:24.745963] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.601 [2024-06-09 23:13:24.746622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.747145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.747154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.601 [2024-06-09 23:13:24.747162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.601 [2024-06-09 23:13:24.747250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.601 [2024-06-09 23:13:24.747392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.601 [2024-06-09 23:13:24.747400] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.601 [2024-06-09 23:13:24.747415] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.601 [2024-06-09 23:13:24.749651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.601 [2024-06-09 23:13:24.758391] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.601 [2024-06-09 23:13:24.759027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.759645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.759682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.601 [2024-06-09 23:13:24.759693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.601 [2024-06-09 23:13:24.759800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.601 [2024-06-09 23:13:24.759946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.601 [2024-06-09 23:13:24.759954] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.601 [2024-06-09 23:13:24.759961] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.601 [2024-06-09 23:13:24.762358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.601 [2024-06-09 23:13:24.770763] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.601 [2024-06-09 23:13:24.771419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.771896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.601 [2024-06-09 23:13:24.771905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.601 [2024-06-09 23:13:24.771913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.601 [2024-06-09 23:13:24.772038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.601 [2024-06-09 23:13:24.772198] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.601 [2024-06-09 23:13:24.772206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.601 [2024-06-09 23:13:24.772212] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.601 [2024-06-09 23:13:24.774491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.864 [2024-06-09 23:13:24.783293] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.784020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.784519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.784533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.784543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.784650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.784759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.784767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.784774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.787058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.864 [2024-06-09 23:13:24.795906] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.796659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.797191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.797204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.797214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.797357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.797510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.797519] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.797526] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.799914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.864 [2024-06-09 23:13:24.808423] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.809204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.809738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.809753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.809762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.809924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.810051] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.810060] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.810067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.812238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.864 [2024-06-09 23:13:24.821061] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.821812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.822347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.822359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.822369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.822593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.822721] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.822729] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.822736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.824991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.864 [2024-06-09 23:13:24.833476] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.834239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.834873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.834910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.834920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.835119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.835283] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.835292] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.835299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.837530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.864 [2024-06-09 23:13:24.845836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.846627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.847158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.847171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.847180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.847378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.847494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.847502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.847509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.849744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.864 [2024-06-09 23:13:24.858208] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.858923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.859295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.859308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.859317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.859470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.859653] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.859661] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.859668] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.861803] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.864 [2024-06-09 23:13:24.870942] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.871572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.872104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.872117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.872127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.864 [2024-06-09 23:13:24.872270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.864 [2024-06-09 23:13:24.872443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.864 [2024-06-09 23:13:24.872452] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.864 [2024-06-09 23:13:24.872460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.864 [2024-06-09 23:13:24.874775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.864 [2024-06-09 23:13:24.883111] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.864 [2024-06-09 23:13:24.883894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.884437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.864 [2024-06-09 23:13:24.884461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.864 [2024-06-09 23:13:24.884470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.884596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.884778] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.884786] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.884794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.886933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.865 [2024-06-09 23:13:24.895450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.896257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.896805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.896819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.896829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.897027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.897191] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.897200] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.897207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.899380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.865 [2024-06-09 23:13:24.907905] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.908626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.909164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.909177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.909190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.909370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.909543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.909551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.909559] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.911783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.865 [2024-06-09 23:13:24.920294] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.921058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.921689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.921726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.921737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.921862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.922008] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.922016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.922023] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.924316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.865 [2024-06-09 23:13:24.932704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.933451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.934001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.934014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.934023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.934185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.934312] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.934321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.934328] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.936724] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.865 [2024-06-09 23:13:24.945192] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.945897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.946431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.946445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.946455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.946657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.946840] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.946848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.946855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.949153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.865 [2024-06-09 23:13:24.957624] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.958354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.958896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.958910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.958920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.959045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.959209] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.959217] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.959224] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.961380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.865 [2024-06-09 23:13:24.970179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.970833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.971355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.971364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.971372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.971554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.971697] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.971705] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.971711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.974018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.865 [2024-06-09 23:13:24.982383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.983063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.983350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.983362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.983372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.983560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.983729] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.983737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.983744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.986114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.865 [2024-06-09 23:13:24.994819] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.865 [2024-06-09 23:13:24.995449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.995936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.865 [2024-06-09 23:13:24.995945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.865 [2024-06-09 23:13:24.995953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.865 [2024-06-09 23:13:24.996118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.865 [2024-06-09 23:13:24.996261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.865 [2024-06-09 23:13:24.996269] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.865 [2024-06-09 23:13:24.996275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.865 [2024-06-09 23:13:24.998449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.866 [2024-06-09 23:13:25.007261] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.866 [2024-06-09 23:13:25.007920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.008385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.008395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.866 [2024-06-09 23:13:25.008430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.866 [2024-06-09 23:13:25.008557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.866 [2024-06-09 23:13:25.008718] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.866 [2024-06-09 23:13:25.008726] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.866 [2024-06-09 23:13:25.008732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.866 [2024-06-09 23:13:25.011043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.866 [2024-06-09 23:13:25.019704] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.866 [2024-06-09 23:13:25.020449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.020983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.020996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.866 [2024-06-09 23:13:25.021005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.866 [2024-06-09 23:13:25.021167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.866 [2024-06-09 23:13:25.021331] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.866 [2024-06-09 23:13:25.021343] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.866 [2024-06-09 23:13:25.021350] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.866 [2024-06-09 23:13:25.023628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.866 [2024-06-09 23:13:25.032025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.866 [2024-06-09 23:13:25.032782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.033316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.866 [2024-06-09 23:13:25.033329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:56.866 [2024-06-09 23:13:25.033339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:56.866 [2024-06-09 23:13:25.033510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:56.866 [2024-06-09 23:13:25.033674] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.866 [2024-06-09 23:13:25.033682] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.866 [2024-06-09 23:13:25.033689] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.866 [2024-06-09 23:13:25.036080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.129 [2024-06-09 23:13:25.044527] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.129 [2024-06-09 23:13:25.045259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.045794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.045808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.129 [2024-06-09 23:13:25.045818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.129 [2024-06-09 23:13:25.045961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.129 [2024-06-09 23:13:25.046052] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.129 [2024-06-09 23:13:25.046060] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.129 [2024-06-09 23:13:25.046067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.129 [2024-06-09 23:13:25.048152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.129 [2024-06-09 23:13:25.056916] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.129 [2024-06-09 23:13:25.057647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.058194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.058207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.129 [2024-06-09 23:13:25.058216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.129 [2024-06-09 23:13:25.058341] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.129 [2024-06-09 23:13:25.058475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.129 [2024-06-09 23:13:25.058484] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.129 [2024-06-09 23:13:25.058496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.129 [2024-06-09 23:13:25.060794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.129 [2024-06-09 23:13:25.069578] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.129 [2024-06-09 23:13:25.070327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.070863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.129 [2024-06-09 23:13:25.070878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.129 [2024-06-09 23:13:25.070888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.071086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.071214] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.071222] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.071229] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.073586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.130 [2024-06-09 23:13:25.082050] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.082782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.083200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.083213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.083222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.083409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.083556] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.083564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.083571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.085850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.130 [2024-06-09 23:13:25.094526] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.095343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.095869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.095883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.095893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.096018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.096201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.096209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.096216] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.098530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.130 [2024-06-09 23:13:25.107044] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.107786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.108320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.108332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.108341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.108474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.108603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.108611] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.108618] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.110860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.130 [2024-06-09 23:13:25.119547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.120281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.120721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.120735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.120744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.120925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.121034] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.121042] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.121050] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.123368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.130 [2024-06-09 23:13:25.132077] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.132838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.133371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.133383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.133393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.133598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.133727] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.133734] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.133742] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.135857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.130 [2024-06-09 23:13:25.144687] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.145437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.146048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.146061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.146070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.146177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.146322] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.146330] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.146338] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.148606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.130 [2024-06-09 23:13:25.157069] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.157824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.158356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.158368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.158378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.158547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.158731] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.158739] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.158746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.161099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.130 [2024-06-09 23:13:25.169510] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.170249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.170868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.170905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.170915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.171078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.171242] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.171250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.171257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.173507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.130 [2024-06-09 23:13:25.181923] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.130 [2024-06-09 23:13:25.182659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.183134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.130 [2024-06-09 23:13:25.183146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.130 [2024-06-09 23:13:25.183156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.130 [2024-06-09 23:13:25.183353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.130 [2024-06-09 23:13:25.183507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.130 [2024-06-09 23:13:25.183516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.130 [2024-06-09 23:13:25.183524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.130 [2024-06-09 23:13:25.185659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.130 [2024-06-09 23:13:25.194428] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.195193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.195783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.195820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.195831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.195956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.196065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.196074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.196081] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.198294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.131 [2024-06-09 23:13:25.206818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.207555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.208095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.208108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.208117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.208296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.208451] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.208460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.208467] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.210584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.131 [2024-06-09 23:13:25.219285] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.220120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.220740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.220781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.220792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.220935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.221063] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.221071] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.221079] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.223236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.131 [2024-06-09 23:13:25.231825] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.232591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.232875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.232888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.232897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.233022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.233186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.233194] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.233201] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.235361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.131 [2024-06-09 23:13:25.244236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.244982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.245343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.245356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.245365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.245518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.245646] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.245654] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.245661] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.247762] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.131 [2024-06-09 23:13:25.256705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.257256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.257843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.257880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.257895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.258057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.258222] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.258230] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.258237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.260596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.131 [2024-06-09 23:13:25.269204] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.269880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.270399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.270414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.270422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.270546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.270689] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.270697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.270703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.273013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.131 [2024-06-09 23:13:25.281773] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.282448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.282935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.282944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.282952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.283079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.283240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.283247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.283254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.285518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.131 [2024-06-09 23:13:25.294181] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.131 [2024-06-09 23:13:25.294879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.295415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.131 [2024-06-09 23:13:25.295428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.131 [2024-06-09 23:13:25.295437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.131 [2024-06-09 23:13:25.295603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.131 [2024-06-09 23:13:25.295712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.131 [2024-06-09 23:13:25.295720] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.131 [2024-06-09 23:13:25.295727] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.131 [2024-06-09 23:13:25.297954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.395 [2024-06-09 23:13:25.306493] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.307138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.307728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.307765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.307775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.307900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.308082] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.308090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.308098] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.395 [2024-06-09 23:13:25.310438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.395 [2024-06-09 23:13:25.319046] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.319824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.320359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.320372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.320381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.320534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.320698] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.320706] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.320714] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.395 [2024-06-09 23:13:25.322831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.395 [2024-06-09 23:13:25.331550] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.332292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.332856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.332879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.332889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.333051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.333201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.333210] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.333218] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.395 [2024-06-09 23:13:25.335575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.395 [2024-06-09 23:13:25.344051] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.344689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.345215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.345224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.345232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.345393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.345520] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.345528] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.345534] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.395 [2024-06-09 23:13:25.347971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.395 [2024-06-09 23:13:25.356696] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.357346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.357975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.358012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.358022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.358185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.358313] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.358321] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.358329] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.395 [2024-06-09 23:13:25.360435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.395 [2024-06-09 23:13:25.369142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.395 [2024-06-09 23:13:25.369751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.370169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.395 [2024-06-09 23:13:25.370183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.395 [2024-06-09 23:13:25.370192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.395 [2024-06-09 23:13:25.370391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.395 [2024-06-09 23:13:25.370561] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.395 [2024-06-09 23:13:25.370574] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.395 [2024-06-09 23:13:25.370582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.372735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.396 [2024-06-09 23:13:25.381594] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.382235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.382751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.382788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.382799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.382942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.383091] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.383099] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.383106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.385386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.396 [2024-06-09 23:13:25.394002] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.394747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.395249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.395262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.395271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.395421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.395567] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.395575] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.395582] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.398023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.396 [2024-06-09 23:13:25.406392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.406775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.407295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.407305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.407313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.407464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.407608] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.407615] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.407626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.409872] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.396 [2024-06-09 23:13:25.418842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.419653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.419928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.419948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.419958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.420102] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.420248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.420256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.420263] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.422485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.396 [2024-06-09 23:13:25.431330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.432109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.432644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.432658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.432667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.432865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.432993] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.433001] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.433008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.435383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.396 [2024-06-09 23:13:25.443824] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.444488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.444976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.444985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.444993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.445172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.445261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.445268] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.445275] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.447521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.396 [2024-06-09 23:13:25.456117] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.456755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.457238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.457248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.457255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.457343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.457508] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.457516] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.457523] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.459797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.396 [2024-06-09 23:13:25.468634] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.469334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.469896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.469906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.469913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.470110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.470215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.470222] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.470229] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.472395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.396 [2024-06-09 23:13:25.481400] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.482077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.482717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.482753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.482764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.396 [2024-06-09 23:13:25.482927] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.396 [2024-06-09 23:13:25.483054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.396 [2024-06-09 23:13:25.483063] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.396 [2024-06-09 23:13:25.483070] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.396 [2024-06-09 23:13:25.485341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.396 [2024-06-09 23:13:25.493979] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.396 [2024-06-09 23:13:25.494661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.495169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.396 [2024-06-09 23:13:25.495182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.396 [2024-06-09 23:13:25.495191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.495353] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.495487] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.495497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.495504] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.497746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.397 [2024-06-09 23:13:25.506652] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.507170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.507770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.507807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.507818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.507962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.508107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.508115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.508122] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.510431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.397 [2024-06-09 23:13:25.519076] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.519693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.520229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.520242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.520251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.520458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.520569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.520576] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.520584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.523012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.397 [2024-06-09 23:13:25.531646] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.532297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.532641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.532678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.532688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.532832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.532960] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.532968] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.532975] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.534992] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.397 [2024-06-09 23:13:25.543929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.544513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.545069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.545082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.545091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.545271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.545444] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.545453] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.545460] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.547757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.397 [2024-06-09 23:13:25.556429] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.557104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.557685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.557722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.557732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.557894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.558059] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.558067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.558074] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.397 [2024-06-09 23:13:25.560472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.397 [2024-06-09 23:13:25.569047] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.397 [2024-06-09 23:13:25.569789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.570323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.397 [2024-06-09 23:13:25.570336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.397 [2024-06-09 23:13:25.570350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.397 [2024-06-09 23:13:25.570519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.397 [2024-06-09 23:13:25.570665] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.397 [2024-06-09 23:13:25.570674] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.397 [2024-06-09 23:13:25.570681] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.660 [2024-06-09 23:13:25.572688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.660 [2024-06-09 23:13:25.581568] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.660 [2024-06-09 23:13:25.582126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.660 [2024-06-09 23:13:25.582660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.660 [2024-06-09 23:13:25.582676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.660 [2024-06-09 23:13:25.582685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.660 [2024-06-09 23:13:25.582884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.583048] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.583057] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.583064] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.585291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.661 [2024-06-09 23:13:25.594004] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.594677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.595196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.595206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.595213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.595319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.595446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.595454] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.595461] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.597840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.661 [2024-06-09 23:13:25.606529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.607186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.607790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.607827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.607838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.608005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.608133] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.608141] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.608148] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.610452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.661 [2024-06-09 23:13:25.618959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.619632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.620178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.620187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.620195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.620356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.620464] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.620472] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.620479] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.622663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.661 [2024-06-09 23:13:25.631263] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.631814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.632337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.632347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.632354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.632536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.632680] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.632687] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.632694] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.634702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.661 [2024-06-09 23:13:25.643818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.644574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.645116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.645129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.645138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.645300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.645438] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.645447] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.645454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.647753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.661 [2024-06-09 23:13:25.656314] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.657059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.657677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.657714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.657725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.657868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.658032] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.658040] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.658047] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.660385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.661 [2024-06-09 23:13:25.668936] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.669475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.670001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.670011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.670019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.670163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.670287] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.670295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.670301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.672435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.661 [2024-06-09 23:13:25.681153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.681802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.682345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.682358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.682367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.682570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.682698] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.682711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.682719] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.661 [2024-06-09 23:13:25.684873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.661 [2024-06-09 23:13:25.693414] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.661 [2024-06-09 23:13:25.694037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.694649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.661 [2024-06-09 23:13:25.694686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.661 [2024-06-09 23:13:25.694697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.661 [2024-06-09 23:13:25.694804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.661 [2024-06-09 23:13:25.694913] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.661 [2024-06-09 23:13:25.694921] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.661 [2024-06-09 23:13:25.694929] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.697087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.662 [2024-06-09 23:13:25.705880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.706658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.707196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.707209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.707218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.707398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.707551] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.707559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.707566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.709737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.662 [2024-06-09 23:13:25.718363] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.719027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.719639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.719676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.719688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.719797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.719961] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.719969] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.719981] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.722181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.662 [2024-06-09 23:13:25.730631] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.731389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.731770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.731783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.731792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.731972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.732155] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.732163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.732170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.734398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.662 [2024-06-09 23:13:25.743064] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.743787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.744140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.744152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.744162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.744286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.744421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.744430] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.744437] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.746681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.662 [2024-06-09 23:13:25.755748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.756495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.756874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.756886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.756895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.757039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.757239] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.757247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.757254] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.759475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.662 [2024-06-09 23:13:25.768231] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.768862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.769380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.769390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.769397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.769544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.769723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.769730] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.769737] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.771739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.662 [2024-06-09 23:13:25.780629] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.781370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.781895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.781908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.781918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.782097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.782244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.782252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.782259] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.784325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.662 [2024-06-09 23:13:25.793221] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.794009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.794665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.794702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.794712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.794838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.795020] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.795028] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.795035] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.797268] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.662 [2024-06-09 23:13:25.805818] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.806596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.807133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.807146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.807156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.662 [2024-06-09 23:13:25.807318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.662 [2024-06-09 23:13:25.807488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.662 [2024-06-09 23:13:25.807497] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.662 [2024-06-09 23:13:25.807505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.662 [2024-06-09 23:13:25.809843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.662 [2024-06-09 23:13:25.818451] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.662 [2024-06-09 23:13:25.819139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.819609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.662 [2024-06-09 23:13:25.819647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.662 [2024-06-09 23:13:25.819658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.663 [2024-06-09 23:13:25.819857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.663 [2024-06-09 23:13:25.819985] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.663 [2024-06-09 23:13:25.819993] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.663 [2024-06-09 23:13:25.820000] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.663 [2024-06-09 23:13:25.822053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.663 [2024-06-09 23:13:25.831006] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.663 [2024-06-09 23:13:25.831696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.663 [2024-06-09 23:13:25.832193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.663 [2024-06-09 23:13:25.832206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.663 [2024-06-09 23:13:25.832215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.663 [2024-06-09 23:13:25.832395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.663 [2024-06-09 23:13:25.832568] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.663 [2024-06-09 23:13:25.832577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.663 [2024-06-09 23:13:25.832584] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.663 [2024-06-09 23:13:25.834920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.926 [2024-06-09 23:13:25.843603] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.844257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.844855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.844892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.926 [2024-06-09 23:13:25.844902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.926 [2024-06-09 23:13:25.845065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.926 [2024-06-09 23:13:25.845229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.926 [2024-06-09 23:13:25.845238] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.926 [2024-06-09 23:13:25.845245] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.926 [2024-06-09 23:13:25.847444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.926 [2024-06-09 23:13:25.855911] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.856670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.857208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.857221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.926 [2024-06-09 23:13:25.857230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.926 [2024-06-09 23:13:25.857417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.926 [2024-06-09 23:13:25.857564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.926 [2024-06-09 23:13:25.857572] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.926 [2024-06-09 23:13:25.857579] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.926 [2024-06-09 23:13:25.859790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.926 [2024-06-09 23:13:25.868365] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.869024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.869595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.869632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.926 [2024-06-09 23:13:25.869644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.926 [2024-06-09 23:13:25.869790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.926 [2024-06-09 23:13:25.869954] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.926 [2024-06-09 23:13:25.869962] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.926 [2024-06-09 23:13:25.869970] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.926 [2024-06-09 23:13:25.872454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.926 [2024-06-09 23:13:25.880730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.881415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.881933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.881948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.926 [2024-06-09 23:13:25.881956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.926 [2024-06-09 23:13:25.882098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.926 [2024-06-09 23:13:25.882240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.926 [2024-06-09 23:13:25.882248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.926 [2024-06-09 23:13:25.882255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.926 [2024-06-09 23:13:25.884415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.926 [2024-06-09 23:13:25.893001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.893775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.894317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.894330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.926 [2024-06-09 23:13:25.894339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.926 [2024-06-09 23:13:25.894509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.926 [2024-06-09 23:13:25.894711] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.926 [2024-06-09 23:13:25.894719] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.926 [2024-06-09 23:13:25.894726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.926 [2024-06-09 23:13:25.897007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.926 [2024-06-09 23:13:25.905436] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.926 [2024-06-09 23:13:25.906205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.906845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.926 [2024-06-09 23:13:25.906882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.906893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.907055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.907183] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.907191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.907198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.909557] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.927 [2024-06-09 23:13:25.917766] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.918311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.918934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.918972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.918988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.919150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.919351] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.919359] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.919366] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.921527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.927 [2024-06-09 23:13:25.930176] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.930889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.931461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.931476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.931485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.931666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.931794] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.931802] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.931809] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.934126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.927 [2024-06-09 23:13:25.942728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.943347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.943740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.943777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.943788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.943913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.944040] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.944049] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.944056] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.946250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.927 [2024-06-09 23:13:25.955035] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.955782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.956321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.956333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.956343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.956515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.956661] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.956670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.956677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.959028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.927 [2024-06-09 23:13:25.967487] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.968253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.968797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.968812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.968822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.968984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.969092] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.969100] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.969108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.971300] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.927 [2024-06-09 23:13:25.979859] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.980669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.980948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.980967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.980977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.981084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.981192] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.981201] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.981208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.983603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.927 [2024-06-09 23:13:25.992377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:25.993025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.993649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:25.993686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:25.993697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:25.993859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:25.993992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:25.994000] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:25.994008] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:25.996308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.927 [2024-06-09 23:13:26.004868] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:26.005555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:26.006003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:26.006015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:26.006023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:26.006148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:26.006309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:26.006316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:26.006323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:26.008568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.927 [2024-06-09 23:13:26.017113] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.927 [2024-06-09 23:13:26.017590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:26.018116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.927 [2024-06-09 23:13:26.018126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.927 [2024-06-09 23:13:26.018133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.927 [2024-06-09 23:13:26.018276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.927 [2024-06-09 23:13:26.018440] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.927 [2024-06-09 23:13:26.018449] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.927 [2024-06-09 23:13:26.018455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.927 [2024-06-09 23:13:26.020747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.928 [2024-06-09 23:13:26.029472] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.030096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.030672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.030709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.030720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.030863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.030973] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.030989] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.030997] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.033436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.928 [2024-06-09 23:13:26.041949] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.042675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.043195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.043205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.043212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.043337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.043483] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.043493] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.043500] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.045558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.928 [2024-06-09 23:13:26.054502] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.055212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.055741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.055779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.055790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.055952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.056079] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.056088] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.056095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.058252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.928 [2024-06-09 23:13:26.066867] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.067638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.068059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.068072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.068082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.068225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.068431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.068440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.068451] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.070952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.928 [2024-06-09 23:13:26.079430] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.080082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.080720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.080757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.080768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.080930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.081095] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.081103] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.081110] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.083234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.928 [2024-06-09 23:13:26.091735] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.928 [2024-06-09 23:13:26.092490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.092877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.928 [2024-06-09 23:13:26.092890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:57.928 [2024-06-09 23:13:26.092899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:57.928 [2024-06-09 23:13:26.093080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:57.928 [2024-06-09 23:13:26.093244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.928 [2024-06-09 23:13:26.093252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.928 [2024-06-09 23:13:26.093261] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.928 [2024-06-09 23:13:26.095641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.192 [2024-06-09 23:13:26.104153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.104914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.105179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.105192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.105202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.105382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.105497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.105505] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.105513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.107700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.192 [2024-06-09 23:13:26.116767] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.117612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.118155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.118167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.118177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.118320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.118472] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.118481] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.118489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.120949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.192 [2024-06-09 23:13:26.129147] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.129948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.130621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.130658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.130668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.130831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.130977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.130985] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.130993] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.133113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.192 [2024-06-09 23:13:26.141743] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.142487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.143045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.143058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.143068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.143248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.143377] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.143386] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.143395] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.145701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.192 [2024-06-09 23:13:26.154348] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.155137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.155677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.155692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.155701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.155863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.155991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.155999] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.156006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.158144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.192 [2024-06-09 23:13:26.166841] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.167481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.168021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.168034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.168043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.168260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.168388] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.168395] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.168411] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.170690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 102723 Killed "${NVMF_APP[@]}" "$@" 00:30:58.192 23:13:26 -- host/bdevperf.sh@36 -- # tgt_init 00:30:58.192 23:13:26 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:58.192 23:13:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:58.192 23:13:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:58.192 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:30:58.192 [2024-06-09 23:13:26.179115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.192 [2024-06-09 23:13:26.179691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.180213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.192 [2024-06-09 23:13:26.180223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.192 [2024-06-09 23:13:26.180230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.192 [2024-06-09 23:13:26.180432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.192 [2024-06-09 23:13:26.180557] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.192 [2024-06-09 23:13:26.180564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.192 [2024-06-09 23:13:26.180571] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.192 [2024-06-09 23:13:26.182633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.192 23:13:26 -- nvmf/common.sh@469 -- # nvmfpid=104449 00:30:58.192 23:13:26 -- nvmf/common.sh@470 -- # waitforlisten 104449 00:30:58.192 23:13:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.192 23:13:26 -- common/autotest_common.sh@819 -- # '[' -z 104449 ']' 00:30:58.192 23:13:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.192 23:13:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:58.192 23:13:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.193 23:13:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:58.193 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:30:58.193 [2024-06-09 23:13:26.191679] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.192362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.192891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.192902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.192909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.193069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.193229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.193237] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.193244] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.195559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
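Here the harness has started a fresh nvmf_tgt (nvmfpid=104449) and waitforlisten blocks until the new process is answering on its RPC socket, /var/tmp/spdk.sock, before any further configuration is attempted. A rough stand-in for that wait, not the actual common.sh implementation (Python; the socket path is the one printed in the log, the timeout and poll interval are arbitrary):

import socket
import time

def wait_for_unix_socket(path: str = "/var/tmp/spdk.sock",
                         timeout: float = 30.0, poll: float = 0.2) -> bool:
    """Poll a UNIX domain socket until something accepts connections on it.

    Returns True once connect() succeeds, False if the deadline passes.
    Handles both "socket file not created yet" and "file exists but nobody
    is listening yet".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(path)
            return True
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(poll)
        finally:
            sock.close()
    return False

if __name__ == "__main__":
    ready = wait_for_unix_socket()
    print("RPC socket ready" if ready else "timed out waiting for RPC socket")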
00:30:58.193 [2024-06-09 23:13:26.204179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.204902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.205611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.205648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.205659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.205840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.205950] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.205957] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.205965] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.208177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.193 [2024-06-09 23:13:26.216593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.217270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.217881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.217918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.217934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.218059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.218205] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.218213] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.218221] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.220506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.193 [2024-06-09 23:13:26.229130] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.229810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.230077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.230087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.230095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.230238] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.230439] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.230449] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.230456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.230870] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:30:58.193 [2024-06-09 23:13:26.230921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.193 [2024-06-09 23:13:26.232639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.193 [2024-06-09 23:13:26.241576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.241931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.242423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.242434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.242441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.242602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.242781] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.242789] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.242796] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.245088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
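The -c 0xE in the EAL parameters (and the -m 0xE passed to nvmf_tgt above) is a CPU core mask: 0xE is binary 1110, so bits 1, 2 and 3 are set and the target runs on cores 1-3. That matches the "Total cores available: 3" notice and the three reactors reported a little further down. A small decoder for such masks (Python; illustrative only):

def coremask_to_cores(mask: str) -> list[int]:
    """Expand a hex CPU core mask (e.g. '0xE') into the list of set bits."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if value >> bit & 1]

print(coremask_to_cores("0xE"))   # [1, 2, 3]  -> reactors on cores 1, 2, 3
print(coremask_to_cores("0xFF"))  # [0, 1, 2, 3, 4, 5, 6, 7]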
00:30:58.193 [2024-06-09 23:13:26.254013] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.254646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.255188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.255197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.255205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.255293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.255422] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.255431] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.255438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.257708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.193 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.193 [2024-06-09 23:13:26.266495] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.267285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.267830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.267845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.267854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.267999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.268181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.268190] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.268197] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.270389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
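"EAL: No free 2048 kB hugepages reported on node 1" means DPDK found no free 2 MB hugepages on NUMA node 1 during initialization; in this run the app continues and its reactors come up, so the message is informational rather than fatal. One way to see the per-node picture is to read the hugepage counters under sysfs. A sketch (Python; assumes the standard Linux sysfs layout with 2048 kB pages configured):

import glob
import os

def hugepage_summary(size_kb: int = 2048) -> None:
    """Print total/free hugepages of the given size for each NUMA node."""
    pattern = f"/sys/devices/system/node/node*/hugepages/hugepages-{size_kb}kB"
    for hp_dir in sorted(glob.glob(pattern)):
        node = hp_dir.split("/")[5]          # e.g. 'node0', 'node1'
        with open(os.path.join(hp_dir, "nr_hugepages")) as f:
            total = int(f.read())
        with open(os.path.join(hp_dir, "free_hugepages")) as f:
            free = int(f.read())
        print(f"{node}: {free}/{total} free {size_kb} kB hugepages")

if __name__ == "__main__":
    hugepage_summary()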
00:30:58.193 [2024-06-09 23:13:26.279005] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.279731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.280246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.280259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.280269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.280439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.280640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.280648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.280656] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.283008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.193 [2024-06-09 23:13:26.291926] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.193 [2024-06-09 23:13:26.292506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.292895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.193 [2024-06-09 23:13:26.292913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.193 [2024-06-09 23:13:26.292922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.193 [2024-06-09 23:13:26.293084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.193 [2024-06-09 23:13:26.293248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.193 [2024-06-09 23:13:26.293257] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.193 [2024-06-09 23:13:26.293264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.193 [2024-06-09 23:13:26.295625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.193 [2024-06-09 23:13:26.299549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:58.194 [2024-06-09 23:13:26.304411] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.305242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.305670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.305686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.305695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.305895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.306059] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.306068] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.306075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.194 [2024-06-09 23:13:26.308266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.194 [2024-06-09 23:13:26.316897] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.317703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.318224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.318237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.318247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.318373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.318526] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.318535] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.318543] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.194 [2024-06-09 23:13:26.320843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.194 [2024-06-09 23:13:26.329417] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.330176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.330701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.330716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.330730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.330875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.331021] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.331029] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.331036] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.194 [2024-06-09 23:13:26.333416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.194 [2024-06-09 23:13:26.341907] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.342687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.343198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.343213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.343222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.343410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.343576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.343584] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.343591] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.194 [2024-06-09 23:13:26.345785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.194 [2024-06-09 23:13:26.354472] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.355062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.355211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.355224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.355233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.355396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.355605] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.355614] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.355621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.194 [2024-06-09 23:13:26.357902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.194 [2024-06-09 23:13:26.361562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:58.194 [2024-06-09 23:13:26.361674] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.194 [2024-06-09 23:13:26.361682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.194 [2024-06-09 23:13:26.361689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
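Because the target was started with -e 0xFFFF, all tracepoint groups are enabled and the trace buffer is exposed at /dev/shm/nvmf_trace.0; the app's own notices above suggest either attaching spdk_trace -s nvmf -i 0 live or copying that file out for offline analysis. For a CI run like this one, a simple option is to snapshot the file into the job's archived artifacts when failures such as the reconnect errors above occur. A hedged helper for that (Python; the destination directory "artifacts" is a placeholder, use whatever path the job actually archives):

import pathlib
import shutil
import time

def snapshot_trace(src: str = "/dev/shm/nvmf_trace.0", dest_dir: str = "artifacts"):
    """Copy the SPDK trace shm file into dest_dir with a timestamped name.

    Returns the destination path, or None if the trace file does not exist
    (for example when the target was started without tracing enabled).
    """
    src_path = pathlib.Path(src)
    if not src_path.exists():
        return None
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / f"nvmf_trace.{time.strftime('%Y%m%d-%H%M%S')}.0"
    shutil.copy2(src_path, target)
    return target

if __name__ == "__main__":
    copied = snapshot_trace()
    print(f"trace copied to {copied}" if copied else "no trace file found")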
00:30:58.194 [2024-06-09 23:13:26.361730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.194 [2024-06-09 23:13:26.362113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.194 [2024-06-09 23:13:26.362115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.194 [2024-06-09 23:13:26.366838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.194 [2024-06-09 23:13:26.367295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.367814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.194 [2024-06-09 23:13:26.367830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.194 [2024-06-09 23:13:26.367839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.194 [2024-06-09 23:13:26.368003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.194 [2024-06-09 23:13:26.368167] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.194 [2024-06-09 23:13:26.368175] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.194 [2024-06-09 23:13:26.368182] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.457 [2024-06-09 23:13:26.370486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.457 [2024-06-09 23:13:26.379255] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.457 [2024-06-09 23:13:26.379932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.457 [2024-06-09 23:13:26.380417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.457 [2024-06-09 23:13:26.380428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.457 [2024-06-09 23:13:26.380435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.457 [2024-06-09 23:13:26.380542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.457 [2024-06-09 23:13:26.380648] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.380656] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.380663] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.382973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.458 [2024-06-09 23:13:26.391831] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.392485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.392955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.392964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.392972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.393095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.393238] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.393245] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.393252] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.395586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.458 [2024-06-09 23:13:26.404237] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.405028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.405549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.405564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.405574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.405741] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.405923] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.405931] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.405939] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.408422] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.458 [2024-06-09 23:13:26.416732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.417408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.417892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.417902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.417910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.418034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.418176] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.418184] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.418190] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.420413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.458 [2024-06-09 23:13:26.428978] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.429693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.430209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.430222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.430231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.430396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.430494] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.430502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.430510] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.432897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.458 [2024-06-09 23:13:26.441560] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.441925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.442441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.442460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.442469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.442597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.442708] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.442717] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.442724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.445149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.458 [2024-06-09 23:13:26.454047] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.454736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.455256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.455266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.455273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.455555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.455662] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.455670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.455676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.457684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.458 [2024-06-09 23:13:26.466559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.467237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.467852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.467889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.467900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.468044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.468245] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.468253] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.468260] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.470364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.458 [2024-06-09 23:13:26.479364] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.480056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.480577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.480592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.480602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.480800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.480928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.458 [2024-06-09 23:13:26.480936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.458 [2024-06-09 23:13:26.480943] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.458 [2024-06-09 23:13:26.483153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.458 [2024-06-09 23:13:26.491628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.458 [2024-06-09 23:13:26.492354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.492863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.458 [2024-06-09 23:13:26.492900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.458 [2024-06-09 23:13:26.492911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.458 [2024-06-09 23:13:26.493055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.458 [2024-06-09 23:13:26.493182] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.493191] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.493198] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.495629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.459 [2024-06-09 23:13:26.504160] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.504922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.505442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.505456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.505465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.505646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.505809] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.505817] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.505825] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.508216] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.459 [2024-06-09 23:13:26.516783] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.517481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.518005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.518018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.518031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.518193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.518321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.518329] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.518336] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.520606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.459 [2024-06-09 23:13:26.529473] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.530104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.530714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.530751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.530761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.530960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.531087] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.531095] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.531103] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.533370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.459 [2024-06-09 23:13:26.541852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.542704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.543234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.543248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.543257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.543408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.543517] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.543525] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.543533] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.545795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.459 [2024-06-09 23:13:26.554347] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.555128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.555656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.555671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.555681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.555866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.556012] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.556020] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.556028] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.558038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.459 [2024-06-09 23:13:26.566727] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.567400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.567904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.567940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.567951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.568095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.568240] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.568248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.568257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.570471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.459 [2024-06-09 23:13:26.579310] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.580010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.580592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.580629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.580639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.580801] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.580928] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.580936] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.580944] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.583120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.459 [2024-06-09 23:13:26.591728] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.592455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.592971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.592984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.592993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.459 [2024-06-09 23:13:26.593155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.459 [2024-06-09 23:13:26.593323] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.459 [2024-06-09 23:13:26.593332] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.459 [2024-06-09 23:13:26.593339] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.459 [2024-06-09 23:13:26.595824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.459 [2024-06-09 23:13:26.604316] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.459 [2024-06-09 23:13:26.605111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.605624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.459 [2024-06-09 23:13:26.605639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.459 [2024-06-09 23:13:26.605648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.460 [2024-06-09 23:13:26.605828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.460 [2024-06-09 23:13:26.605956] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.460 [2024-06-09 23:13:26.605964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.460 [2024-06-09 23:13:26.605971] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.460 [2024-06-09 23:13:26.608126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.460 [2024-06-09 23:13:26.616889] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.460 [2024-06-09 23:13:26.617628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.460 [2024-06-09 23:13:26.618152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.460 [2024-06-09 23:13:26.618164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.460 [2024-06-09 23:13:26.618173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.460 [2024-06-09 23:13:26.618317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.460 [2024-06-09 23:13:26.618468] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.460 [2024-06-09 23:13:26.618477] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.460 [2024-06-09 23:13:26.618484] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.460 [2024-06-09 23:13:26.620657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.460 [2024-06-09 23:13:26.629377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.460 [2024-06-09 23:13:26.629803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.460 [2024-06-09 23:13:26.630323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.460 [2024-06-09 23:13:26.630332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.460 [2024-06-09 23:13:26.630340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.460 [2024-06-09 23:13:26.630451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.460 [2024-06-09 23:13:26.630576] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.460 [2024-06-09 23:13:26.630590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.460 [2024-06-09 23:13:26.630597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.460 [2024-06-09 23:13:26.632672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.723 [2024-06-09 23:13:26.641872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.642279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.642665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.642702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.642713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.642912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.643094] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.643104] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.643111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.645417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.723 [2024-06-09 23:13:26.654223] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.654957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.655529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.655544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.655554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.655698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.655881] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.655889] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.655896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.658055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.723 [2024-06-09 23:13:26.666753] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.667407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.667894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.667932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.667943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.668123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.668234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.668243] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.668255] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.670376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.723 [2024-06-09 23:13:26.679312] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.680067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.680500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.680516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.680525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.680651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.680797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.680806] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.680813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.682969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.723 [2024-06-09 23:13:26.691909] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.692408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.693083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.693121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.693132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.693258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.693431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.693441] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.693448] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.695654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.723 [2024-06-09 23:13:26.704372] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.723 [2024-06-09 23:13:26.705106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.705608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.723 [2024-06-09 23:13:26.705646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.723 [2024-06-09 23:13:26.705657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.723 [2024-06-09 23:13:26.705782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.723 [2024-06-09 23:13:26.705946] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.723 [2024-06-09 23:13:26.705956] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.723 [2024-06-09 23:13:26.705963] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.723 [2024-06-09 23:13:26.708233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.723 [2024-06-09 23:13:26.716906] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.717620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.718140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.718154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.718164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.718307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.718459] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.718469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.718476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.720757] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.724 [2024-06-09 23:13:26.729392] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.730075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.730701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.730740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.730750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.730894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.731058] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.731067] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.731075] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.733375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.724 [2024-06-09 23:13:26.741935] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.742667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.743208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.743222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.743232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.743375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.743546] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.743556] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.743563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.745714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.724 [2024-06-09 23:13:26.754229] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.754892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.755440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.755460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.755468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.755578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.755723] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.755731] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.755739] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.758054] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.724 [2024-06-09 23:13:26.766873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.767623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.768188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.768201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.768210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.768354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.768507] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.768517] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.768524] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.770893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.724 [2024-06-09 23:13:26.779470] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.780244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.780793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.780809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.780819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.780999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.781090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.781098] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.781106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.783336] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.724 [2024-06-09 23:13:26.792062] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.792714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.793278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.793292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.793302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.793489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.793636] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.793645] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.793653] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.795841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.724 [2024-06-09 23:13:26.804479] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.805267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.805791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.805807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.805817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.805942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.806125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.806134] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.806142] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.808371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.724 [2024-06-09 23:13:26.816900] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.817660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.818203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.818217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.818227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.818414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.724 [2024-06-09 23:13:26.818543] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.724 [2024-06-09 23:13:26.818552] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.724 [2024-06-09 23:13:26.818560] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.724 [2024-06-09 23:13:26.820530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.724 [2024-06-09 23:13:26.829226] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.724 [2024-06-09 23:13:26.829863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.830353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.724 [2024-06-09 23:13:26.830369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.724 [2024-06-09 23:13:26.830377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.724 [2024-06-09 23:13:26.830523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.830686] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.830694] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.830701] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.832885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.725 [2024-06-09 23:13:26.841590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.725 [2024-06-09 23:13:26.841954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.842600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.842639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.725 [2024-06-09 23:13:26.842650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.725 [2024-06-09 23:13:26.842830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.842940] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.842949] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.842957] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.845077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.725 [2024-06-09 23:13:26.854060] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.725 [2024-06-09 23:13:26.854612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.855140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.855151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.725 [2024-06-09 23:13:26.855159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.725 [2024-06-09 23:13:26.855266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.855431] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.855440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.855447] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.857612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.725 [2024-06-09 23:13:26.866716] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.725 [2024-06-09 23:13:26.867351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.867848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.867860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.725 [2024-06-09 23:13:26.867872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.725 [2024-06-09 23:13:26.867978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.868139] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.868148] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.868154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.870356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.725 [2024-06-09 23:13:26.879313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.725 [2024-06-09 23:13:26.879956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.880369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.880380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.725 [2024-06-09 23:13:26.880388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.725 [2024-06-09 23:13:26.880517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.880624] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.880632] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.880640] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.882848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.725 [2024-06-09 23:13:26.891895] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.725 [2024-06-09 23:13:26.892645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.893160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.725 [2024-06-09 23:13:26.893174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.725 [2024-06-09 23:13:26.893184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.725 [2024-06-09 23:13:26.893346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.725 [2024-06-09 23:13:26.893463] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.725 [2024-06-09 23:13:26.893473] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.725 [2024-06-09 23:13:26.893481] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.725 [2024-06-09 23:13:26.895708] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.988 [2024-06-09 23:13:26.904343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.988 [2024-06-09 23:13:26.905105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.905673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.905689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.988 [2024-06-09 23:13:26.905699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.988 [2024-06-09 23:13:26.905847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.988 [2024-06-09 23:13:26.905977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.988 [2024-06-09 23:13:26.905986] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.988 [2024-06-09 23:13:26.905994] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.988 [2024-06-09 23:13:26.908204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.988 [2024-06-09 23:13:26.916811] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.988 [2024-06-09 23:13:26.917601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.918119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.918134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.988 [2024-06-09 23:13:26.918143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.988 [2024-06-09 23:13:26.918306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.988 [2024-06-09 23:13:26.918458] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.988 [2024-06-09 23:13:26.918467] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.988 [2024-06-09 23:13:26.918475] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.988 [2024-06-09 23:13:26.920500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.988 [2024-06-09 23:13:26.929306] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.988 [2024-06-09 23:13:26.929964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.930619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.930657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.988 [2024-06-09 23:13:26.930669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.988 [2024-06-09 23:13:26.930868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.988 [2024-06-09 23:13:26.930978] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.988 [2024-06-09 23:13:26.930988] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.988 [2024-06-09 23:13:26.930996] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.988 [2024-06-09 23:13:26.933333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.988 [2024-06-09 23:13:26.941864] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.988 [2024-06-09 23:13:26.942657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.943198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.988 [2024-06-09 23:13:26.943213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.988 [2024-06-09 23:13:26.943222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.988 [2024-06-09 23:13:26.943348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:26.943560] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:26.943570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:26.943577] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:26.945820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.989 [2024-06-09 23:13:26.954259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:26.954925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.955306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.955318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:26.955327] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:26.955456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:26.955619] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:26.955628] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:26.955636] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:26.957786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.989 [2024-06-09 23:13:26.966670] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:26.967295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.967889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.967928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:26.967939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:26.968119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:26.968266] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:26.968275] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:26.968283] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:26.970538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.989 [2024-06-09 23:13:26.979139] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:26.979897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.980172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.980186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:26.980196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:26.980358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:26.980549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:26.980564] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:26.980572] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:26.982671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.989 [2024-06-09 23:13:26.991675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:26.992087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.992621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:26.992633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:26.992641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:26.992820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:26.992945] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:26.992954] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:26.992960] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:26.995125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.989 23:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:58.989 23:13:26 -- common/autotest_common.sh@852 -- # return 0 00:30:58.989 23:13:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:58.989 23:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:58.989 23:13:26 -- common/autotest_common.sh@10 -- # set +x 00:30:58.989 [2024-06-09 23:13:27.004092] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:27.004903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.005211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.005233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:27.005243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:27.005388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:27.005524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:27.005535] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:27.005543] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:27.007806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.989 [2024-06-09 23:13:27.016675] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:27.017334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.017836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.017848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:27.017856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:27.017981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:27.018129] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:27.018138] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:27.018145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:27.020384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.989 [2024-06-09 23:13:27.029011] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:27.029764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.030290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.030305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:27.030315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:27.030448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:27.030540] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:27.030549] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:27.030556] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:27.032692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.989 23:13:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:58.989 23:13:27 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:58.989 23:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.989 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.989 [2024-06-09 23:13:27.041524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:27.041644] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:58.989 [2024-06-09 23:13:27.042317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.042855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.989 [2024-06-09 23:13:27.042872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.989 [2024-06-09 23:13:27.042882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.989 [2024-06-09 23:13:27.043044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.989 [2024-06-09 23:13:27.043209] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.989 [2024-06-09 23:13:27.043218] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.989 [2024-06-09 23:13:27.043226] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.989 [2024-06-09 23:13:27.045420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.989 23:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.989 23:13:27 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:58.989 23:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.989 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.989 [2024-06-09 23:13:27.054008] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.989 [2024-06-09 23:13:27.054793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.054938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.054957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.990 [2024-06-09 23:13:27.054967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.990 [2024-06-09 23:13:27.055055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.990 [2024-06-09 23:13:27.055202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.990 [2024-06-09 23:13:27.055211] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.990 [2024-06-09 23:13:27.055219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:58.990 [2024-06-09 23:13:27.057392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.990 [2024-06-09 23:13:27.066571] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.990 [2024-06-09 23:13:27.067019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.067516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.067528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.990 [2024-06-09 23:13:27.067536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.990 [2024-06-09 23:13:27.067696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.990 [2024-06-09 23:13:27.067804] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.990 [2024-06-09 23:13:27.067812] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.990 [2024-06-09 23:13:27.067819] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.990 [2024-06-09 23:13:27.070002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.990 [2024-06-09 23:13:27.079009] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.990 [2024-06-09 23:13:27.079782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.080317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.080332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.990 [2024-06-09 23:13:27.080341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.990 [2024-06-09 23:13:27.080547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.990 [2024-06-09 23:13:27.080694] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.990 [2024-06-09 23:13:27.080703] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.990 [2024-06-09 23:13:27.080711] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.990 [2024-06-09 23:13:27.082753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.990 Malloc0 00:30:58.990 23:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.990 23:13:27 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:58.990 23:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.990 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.990 [2024-06-09 23:13:27.091803] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.990 [2024-06-09 23:13:27.092492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.093008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.093019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.990 [2024-06-09 23:13:27.093027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.990 [2024-06-09 23:13:27.093187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.990 [2024-06-09 23:13:27.093330] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.990 [2024-06-09 23:13:27.093338] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.990 [2024-06-09 23:13:27.093346] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.990 [2024-06-09 23:13:27.095640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.990 23:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.990 23:13:27 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:58.990 23:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.990 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.990 [2024-06-09 23:13:27.104245] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.990 23:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.990 23:13:27 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.990 [2024-06-09 23:13:27.104963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 23:13:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:58.990 23:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:58.990 [2024-06-09 23:13:27.105436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.990 [2024-06-09 23:13:27.105448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa8b9c0 with addr=10.0.0.2, port=4420 00:30:58.990 [2024-06-09 23:13:27.105456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8b9c0 is same with the state(5) to be set 00:30:58.990 [2024-06-09 23:13:27.105580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8b9c0 (9): Bad file descriptor 00:30:58.990 [2024-06-09 23:13:27.105759] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.990 [2024-06-09 23:13:27.105767] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.990 [2024-06-09 23:13:27.105774] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.990 [2024-06-09 23:13:27.108066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.990 [2024-06-09 23:13:27.111655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.990 23:13:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:58.990 [2024-06-09 23:13:27.116788] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.990 23:13:27 -- host/bdevperf.sh@38 -- # wait 103116 00:30:58.990 [2024-06-09 23:13:27.148336] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
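The interleaved rpc_cmd calls above are host/bdevperf.sh building the NVMe-oF/TCP target that the failing reconnect loop has been waiting for. Collected in one place, and assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py (the $SPDK_DIR variable below is illustrative, not from the log), the target setup amounts to:

    rpc=$SPDK_DIR/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport, options as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only once the last call brings up the 10.0.0.2:4420 listener do the connect() errno = 111 failures above stop, which is why the final reset attempt is the first one to log "Resetting controller successful."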
00:31:08.997
00:31:08.997 Latency(us)
00:31:08.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:08.997 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:08.997 Verification LBA range: start 0x0 length 0x4000
00:31:08.997 Nvme1n1 : 15.01 13699.94 53.52 14399.47 0.00 4540.51 1262.93 21954.56
00:31:08.997 ===================================================================================================================
00:31:08.997 Total : 13699.94 53.52 14399.47 0.00 4540.51 1262.93 21954.56
00:31:08.997 23:13:35 -- host/bdevperf.sh@39 -- # sync
00:31:08.997 23:13:35 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:08.997 23:13:35 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:08.997 23:13:35 -- common/autotest_common.sh@10 -- # set +x
00:31:08.997 23:13:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:08.997 23:13:35 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:08.997 23:13:35 -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:08.997 23:13:35 -- nvmf/common.sh@476 -- # nvmfcleanup
00:31:08.997 23:13:35 -- nvmf/common.sh@116 -- # sync
00:31:08.997 23:13:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:31:08.997 23:13:35 -- nvmf/common.sh@119 -- # set +e
00:31:08.997 23:13:35 -- nvmf/common.sh@120 -- # for i in {1..20}
00:31:08.997 23:13:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:31:08.997 rmmod nvme_tcp
00:31:08.997 rmmod nvme_fabrics
00:31:08.997 rmmod nvme_keyring
00:31:08.997 23:13:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:31:08.997 23:13:35 -- nvmf/common.sh@123 -- # set -e
00:31:08.997 23:13:35 -- nvmf/common.sh@124 -- # return 0
00:31:08.997 23:13:35 -- nvmf/common.sh@477 -- # '[' -n 104449 ']'
00:31:08.997 23:13:35 -- nvmf/common.sh@478 -- # killprocess 104449
00:31:08.997 23:13:35 -- common/autotest_common.sh@926 -- # '[' -z 104449 ']'
00:31:08.997 23:13:35 -- common/autotest_common.sh@930 -- # kill -0 104449
00:31:08.997 23:13:35 -- common/autotest_common.sh@931 -- # uname
00:31:08.997 23:13:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:08.997 23:13:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104449
00:31:08.997 23:13:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:08.997 23:13:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:08.997 23:13:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104449'
00:31:08.997 killing process with pid 104449
00:31:08.997 23:13:35 -- common/autotest_common.sh@945 -- # kill 104449
00:31:08.997 23:13:35 -- common/autotest_common.sh@950 -- # wait 104449
00:31:08.997 23:13:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:31:08.997 23:13:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:31:08.997 23:13:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:31:08.997 23:13:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:08.997 23:13:35 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:31:08.997 23:13:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:08.997 23:13:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:08.997 23:13:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:09.941 23:13:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:31:09.941
00:31:09.941 real 0m27.445s
00:31:09.941 user 1m2.457s
00:31:09.941 sys 0m6.794s
00:31:09.941 23:13:38 --
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.941 23:13:38 -- common/autotest_common.sh@10 -- # set +x 00:31:09.941 ************************************ 00:31:09.941 END TEST nvmf_bdevperf 00:31:09.941 ************************************ 00:31:09.941 23:13:38 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:09.941 23:13:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:09.941 23:13:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:09.941 23:13:38 -- common/autotest_common.sh@10 -- # set +x 00:31:09.941 ************************************ 00:31:09.941 START TEST nvmf_target_disconnect 00:31:09.941 ************************************ 00:31:09.941 23:13:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:10.202 * Looking for test storage... 00:31:10.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.202 23:13:38 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.202 23:13:38 -- nvmf/common.sh@7 -- # uname -s 00:31:10.202 23:13:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.202 23:13:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.202 23:13:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.202 23:13:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.202 23:13:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.202 23:13:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.202 23:13:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.202 23:13:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.202 23:13:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.202 23:13:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.202 23:13:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.202 23:13:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:10.202 23:13:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.202 23:13:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.202 23:13:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.202 23:13:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.202 23:13:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.202 23:13:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.202 23:13:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.202 23:13:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.202 23:13:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.202 23:13:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.202 23:13:38 -- paths/export.sh@5 -- # export PATH 00:31:10.202 23:13:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.202 23:13:38 -- nvmf/common.sh@46 -- # : 0 00:31:10.202 23:13:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:10.202 23:13:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:10.202 23:13:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:10.202 23:13:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.202 23:13:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.202 23:13:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:10.202 23:13:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:10.202 23:13:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:10.202 23:13:38 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:10.202 23:13:38 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:10.202 23:13:38 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:10.202 23:13:38 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:10.202 23:13:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:10.202 23:13:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.202 23:13:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:10.203 23:13:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:10.203 23:13:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:10.203 23:13:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.203 23:13:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.203 23:13:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.203 23:13:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:10.203 23:13:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:10.203 23:13:38 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:31:10.203 23:13:38 -- common/autotest_common.sh@10 -- # set +x 00:31:16.796 23:13:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:16.796 23:13:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:16.796 23:13:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:16.796 23:13:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:16.796 23:13:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:16.796 23:13:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:16.796 23:13:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:16.796 23:13:44 -- nvmf/common.sh@294 -- # net_devs=() 00:31:16.796 23:13:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:16.796 23:13:44 -- nvmf/common.sh@295 -- # e810=() 00:31:16.796 23:13:44 -- nvmf/common.sh@295 -- # local -ga e810 00:31:16.796 23:13:44 -- nvmf/common.sh@296 -- # x722=() 00:31:16.796 23:13:44 -- nvmf/common.sh@296 -- # local -ga x722 00:31:16.796 23:13:44 -- nvmf/common.sh@297 -- # mlx=() 00:31:16.796 23:13:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:16.796 23:13:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.796 23:13:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:16.796 23:13:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:16.796 23:13:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:16.796 23:13:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:16.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:16.796 23:13:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:16.796 23:13:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:16.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:16.796 23:13:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:16.796 23:13:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.796 23:13:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.796 23:13:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:16.796 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:16.796 23:13:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.796 23:13:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:16.796 23:13:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.796 23:13:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.796 23:13:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:16.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:16.796 23:13:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.796 23:13:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:16.796 23:13:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:16.796 23:13:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:16.796 23:13:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.796 23:13:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.796 23:13:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.796 23:13:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:16.796 23:13:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.796 23:13:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.796 23:13:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:16.796 23:13:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.796 23:13:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.796 23:13:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:16.796 23:13:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:16.796 23:13:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.796 23:13:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:17.057 23:13:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:17.057 23:13:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:17.057 23:13:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:17.057 23:13:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:17.057 23:13:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:17.057 23:13:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:17.057 23:13:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:17.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:17.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:31:17.057 00:31:17.057 --- 10.0.0.2 ping statistics --- 00:31:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.057 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:31:17.057 23:13:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:17.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:31:17.057 00:31:17.057 --- 10.0.0.1 ping statistics --- 00:31:17.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.057 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:31:17.057 23:13:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.057 23:13:45 -- nvmf/common.sh@410 -- # return 0 00:31:17.057 23:13:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:17.057 23:13:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.057 23:13:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:17.057 23:13:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:17.057 23:13:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.057 23:13:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:17.057 23:13:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:17.319 23:13:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:17.319 23:13:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.319 23:13:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.319 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:17.319 ************************************ 00:31:17.319 START TEST nvmf_target_disconnect_tc1 00:31:17.319 ************************************ 00:31:17.319 23:13:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:31:17.319 23:13:45 -- host/target_disconnect.sh@32 -- # set +e 00:31:17.319 23:13:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:17.319 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.319 [2024-06-09 23:13:45.353465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.319 [2024-06-09 23:13:45.353998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.319 [2024-06-09 23:13:45.354015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16d2860 with addr=10.0.0.2, port=4420 00:31:17.319 [2024-06-09 23:13:45.354042] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:17.319 [2024-06-09 23:13:45.354054] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:17.319 [2024-06-09 23:13:45.354069] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:17.319 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:17.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:17.319 Initializing NVMe Controllers 00:31:17.319 23:13:45 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:17.319 23:13:45 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:17.319 23:13:45 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:31:17.319 23:13:45 -- common/autotest_common.sh@1132 -- # return 0 00:31:17.319 
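Before the target_disconnect tests proceed, it is worth collecting the nvmf_tcp_init steps traced above into one place, since the same two-sided topology is reused below: the first E810 port (cvl_0_0) is moved into a private network namespace and carries the target address 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch, with interface names taken from this runner (they will differ on other machines) and the shell variable names being mine:

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                                # target port gets its own namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"                            # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"        # target side, inside the namespace
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                               # initiator -> target check
    ip netns exec "$NS" ping -c 1 10.0.0.1                           # target -> initiator check

The sub-millisecond round-trip times in the ping output above confirm the namespace plumbing before any NVMe/TCP traffic is attempted.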
23:13:45 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:17.319 23:13:45 -- host/target_disconnect.sh@41 -- # set -e 00:31:17.319 00:31:17.319 real 0m0.105s 00:31:17.319 user 0m0.044s 00:31:17.319 sys 0m0.060s 00:31:17.319 23:13:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:17.319 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:17.319 ************************************ 00:31:17.319 END TEST nvmf_target_disconnect_tc1 00:31:17.319 ************************************ 00:31:17.319 23:13:45 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:17.319 23:13:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:17.319 23:13:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:17.319 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:17.319 ************************************ 00:31:17.319 START TEST nvmf_target_disconnect_tc2 00:31:17.319 ************************************ 00:31:17.319 23:13:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:31:17.319 23:13:45 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:17.319 23:13:45 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:17.319 23:13:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:17.319 23:13:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:17.319 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:17.319 23:13:45 -- nvmf/common.sh@469 -- # nvmfpid=110417 00:31:17.319 23:13:45 -- nvmf/common.sh@470 -- # waitforlisten 110417 00:31:17.319 23:13:45 -- common/autotest_common.sh@819 -- # '[' -z 110417 ']' 00:31:17.319 23:13:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:17.319 23:13:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.319 23:13:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:17.319 23:13:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.319 23:13:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:17.319 23:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:17.319 [2024-06-09 23:13:45.478611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:17.319 [2024-06-09 23:13:45.478690] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.581 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.581 [2024-06-09 23:13:45.566387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:17.581 [2024-06-09 23:13:45.659024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:17.581 [2024-06-09 23:13:45.659179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.581 [2024-06-09 23:13:45.659190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.581 [2024-06-09 23:13:45.659197] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:17.581 [2024-06-09 23:13:45.659356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:17.581 [2024-06-09 23:13:45.659515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:17.581 [2024-06-09 23:13:45.659840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:17.581 [2024-06-09 23:13:45.659842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:18.154 23:13:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:18.154 23:13:46 -- common/autotest_common.sh@852 -- # return 0 00:31:18.154 23:13:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:18.154 23:13:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:18.154 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.154 23:13:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.154 23:13:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:18.154 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.154 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.154 Malloc0 00:31:18.154 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.154 23:13:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:18.154 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.154 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.415 [2024-06-09 23:13:46.337644] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.415 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.415 23:13:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:18.415 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.415 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.415 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.415 23:13:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:18.415 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.415 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.415 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.415 23:13:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.415 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.415 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.415 [2024-06-09 23:13:46.378012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.415 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.415 23:13:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:18.415 23:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.415 23:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:18.415 23:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.415 23:13:46 -- host/target_disconnect.sh@50 -- # reconnectpid=110545 00:31:18.415 23:13:46 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:18.415 23:13:46 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.415 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.396 23:13:48 -- host/target_disconnect.sh@53 -- # kill -9 110417 00:31:20.396 23:13:48 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.396 starting I/O failed 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.396 starting I/O failed 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.396 starting I/O failed 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.396 starting I/O failed 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.396 starting I/O failed 00:31:20.396 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Write completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 Read completed with error (sct=0, sc=8) 00:31:20.397 starting I/O failed 00:31:20.397 [2024-06-09 23:13:48.410068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:20.397 [2024-06-09 23:13:48.410337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.410699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.410726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 
with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.411119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.411751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.411780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.412304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.412883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.412913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.413623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.414174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.414185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.414797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.415342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.415352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.415945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.416598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.416627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.417139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.417751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.417779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.418142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.418761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.418790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 
00:31:20.397 [2024-06-09 23:13:48.419155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.419780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.419811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.420187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.420707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.420737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.421219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.421661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.421690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.422208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.422626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.422654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.423182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.423806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.423835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.424192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.424676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.424705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.425099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.425695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.425724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 
00:31:20.397 [2024-06-09 23:13:48.426099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.426705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.426734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.427104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.427469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.427478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.428014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.428384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.428391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.428667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.429040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.429047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.429533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.430035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.430042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.430559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.431080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.431087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.397 qpair failed and we were unable to recover it. 00:31:20.397 [2024-06-09 23:13:48.431554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.431897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.397 [2024-06-09 23:13:48.431906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-06-09 23:13:48.432278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.432778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.432786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.433017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.433223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.433236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.433753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.434280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.434288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.434777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.435138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.435145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.435765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.436162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.436173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.436751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.437162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.437173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.437634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.438034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.438044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-06-09 23:13:48.438539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.439015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.439022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.439540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.440008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.440016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.440358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.440834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.440842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.441342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.441739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.441746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.442126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.442622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.442650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.443041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.443587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.443596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.444078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.444604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.444633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-06-09 23:13:48.445032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.445524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.445533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.446009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.446437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.446445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.446939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.447447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.447455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.447967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.448362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.448370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.448864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.449307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.449315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.449798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.450216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.450224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.450316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.450742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.450750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-06-09 23:13:48.451212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.451439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.451450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.451926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.452304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.452312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.452707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.453225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.453233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.453640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.454119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.454132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.454711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.455107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.455118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.455721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.456267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.456277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 00:31:20.398 [2024-06-09 23:13:48.456861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.457393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.457416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.398 qpair failed and we were unable to recover it. 
00:31:20.398 [2024-06-09 23:13:48.458037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.458656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.398 [2024-06-09 23:13:48.458685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.459116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.459703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.459732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.460094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.460698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.460727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.461244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.461774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.461803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.462219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.462764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.462793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.463241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.463787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.463816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.464334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.464766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.464799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-06-09 23:13:48.465268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.465953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.465982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.466624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.467129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.467139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.467744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.468138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.468149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.468745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.469261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.469271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.469873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.470268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.470279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.470641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.471026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.471038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.471527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.472060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.472067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-06-09 23:13:48.472538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.472900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.472907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.473415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.473659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.473672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.473902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.474386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.474397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.474846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.475322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.475329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.475660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.476092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.476100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.476590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.477084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.477092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.477601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.477963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.477970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-06-09 23:13:48.478480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.478956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.478963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.479534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.479989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.479997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.480266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.480743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.480751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.481225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.481708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.481736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.482320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.482896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.482925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.483618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.484013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.484027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.484525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.485050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.485057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 
00:31:20.399 [2024-06-09 23:13:48.485538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.486047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.486056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.399 [2024-06-09 23:13:48.486566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.487043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.399 [2024-06-09 23:13:48.487050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.399 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.487700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.488252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.488262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.488859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.489410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.489421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.489917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.490623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.490652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.491135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.491713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.491742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.492231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.492346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.492358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 
00:31:20.400 [2024-06-09 23:13:48.492821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.493009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.493020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.493513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.494006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.494013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.494499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.494954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.494962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.495468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.495957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.495965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.496448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.496803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.496811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.497245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.497698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.497706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.498193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.498751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.498780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 
00:31:20.400 [2024-06-09 23:13:48.499150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.499751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.499780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.500314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.500686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.500695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.501166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.501614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.501642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.502011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.502353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.502361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.502781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.503229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.503237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.503832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.504337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.504348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.504936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.505635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.505664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 
00:31:20.400 [2024-06-09 23:13:48.506070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.506640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.506669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.507179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.507746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.507775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.508256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.508832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.508860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.509316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.509859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.509888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.510095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.510551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.510559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.510794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.511283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.511291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.511691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.512209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.512217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 
00:31:20.400 [2024-06-09 23:13:48.512603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.513093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.513101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.513626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.514145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.514152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.400 [2024-06-09 23:13:48.514662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.515012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.400 [2024-06-09 23:13:48.515020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.400 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.515257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.515479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.515491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.515949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.516432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.516439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.516931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.517447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.517454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.517851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.518369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.518376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 
00:31:20.401 [2024-06-09 23:13:48.518885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.519233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.519241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.519737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.520272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.520279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.520867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.521424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.521443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.521965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.522621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.522650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.523036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.523392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.523400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.523884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.524372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.524379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.524852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.525246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.525257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 
00:31:20.401 [2024-06-09 23:13:48.525849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.526352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.526362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.526930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.527394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.527408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.527954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.528343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.528353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.528832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.529346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.529356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.529973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.530631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.530660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.531148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.531684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.531713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.532228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.532794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.532823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 
00:31:20.401 [2024-06-09 23:13:48.533182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.533678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.533707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.534223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.534823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.534852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.535343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.535915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.535943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.536626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.537146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.537156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.537763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.538300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.401 [2024-06-09 23:13:48.538310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.401 qpair failed and we were unable to recover it. 00:31:20.401 [2024-06-09 23:13:48.538797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.539161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.539169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.539755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.540294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.540304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 
00:31:20.402 [2024-06-09 23:13:48.540540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.541054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.541063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.541592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.542110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.542117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.542747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.543326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.543336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.543885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.544399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.544412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.544897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.545426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.545445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.545968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.546373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.546381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.546893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.547368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.547375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 
00:31:20.402 [2024-06-09 23:13:48.547944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.548628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.548657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.549173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.549696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.549726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.550222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.550782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.550811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.551041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.551584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.551592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.551977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.552510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.552517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.553010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.553501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.553509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 00:31:20.402 [2024-06-09 23:13:48.554094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.554609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.402 [2024-06-09 23:13:48.554617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.402 qpair failed and we were unable to recover it. 
00:31:20.675 [2024-06-09 23:13:48.703709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.704259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.704270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.704886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.705146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.705161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.705732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.706245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.706256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.706862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.707400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.707415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.708003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.708642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.708671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.709170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.709777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.709807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.710324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.710837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.710865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 
00:31:20.675 [2024-06-09 23:13:48.711345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.711944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.711973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.712619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.713160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.713170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.713745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.714281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.714292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.714817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.715283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.715291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.675 [2024-06-09 23:13:48.715690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.716163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.675 [2024-06-09 23:13:48.716171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.675 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.716293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.716656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.716665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.717162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.717683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.717690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 
00:31:20.676 [2024-06-09 23:13:48.718203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.718787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.718816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.719332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.719889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.719897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.720635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.721138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.721148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.721416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.721823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.721831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.722318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.722587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.722595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.723094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.723567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.723575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.724052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.724503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.724511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 
00:31:20.676 [2024-06-09 23:13:48.724909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.725345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.725352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.725850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.726327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.726334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.726799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.727273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.727281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.727873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.728133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.728148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.728642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.729131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.729142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.729763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.730264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.730274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.730882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.731298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.731309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 
00:31:20.676 [2024-06-09 23:13:48.731868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.732389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.732400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.732909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.733621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.733649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.734154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.734646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.734675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.735168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.735784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.735813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.736320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.736902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.736930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.737616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.738120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.738131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.738712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.739235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.739245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 
00:31:20.676 [2024-06-09 23:13:48.739853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.740409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.740420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.740950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.741623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.741651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.742146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.742729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.742757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.743228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.743775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.743804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.744340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.744890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.744920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.676 [2024-06-09 23:13:48.745408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.745886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.676 [2024-06-09 23:13:48.745915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.676 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.746375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.746896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.746925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 
00:31:20.677 [2024-06-09 23:13:48.747221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.747813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.747842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.748235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.748803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.748832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.749231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.749791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.749820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.750219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.750706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.750735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.751247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.751798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.751827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.752317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.752990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.753019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.753659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.754087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.754097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 
00:31:20.677 [2024-06-09 23:13:48.754306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.754814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.754822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.755279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.755848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.755877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.756393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.756907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.756936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.757654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.758168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.758178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.758673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.758940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.758955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.759337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.759756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.759765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.760162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.760626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.760655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 
00:31:20.677 [2024-06-09 23:13:48.761174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.761711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.761741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.762166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.762786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.762815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.763303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.763810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.763818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.764336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.764971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.765000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.765400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.765994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.766023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.766688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.767082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.767092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.767716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.768243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.768253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 
00:31:20.677 [2024-06-09 23:13:48.768862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.769273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.769284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.769697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.770074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.770082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.770686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.771214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.771225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.771793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.772303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.772313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.772814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.773198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.773206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.773810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.774325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.774335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 00:31:20.677 [2024-06-09 23:13:48.774788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.775052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.677 [2024-06-09 23:13:48.775069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.677 qpair failed and we were unable to recover it. 
00:31:20.678 [2024-06-09 23:13:48.775287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.775792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.775801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.776121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.776501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.776509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.777001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.777486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.777493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.778005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.778527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.778535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.779017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.779498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.779506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.780002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.780482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.780490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.781005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.781495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.781503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 
00:31:20.678 [2024-06-09 23:13:48.782023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.782535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.782544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.782871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.783368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.783375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.783880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.784299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.784307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.784646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.785123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.785133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.785524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.785970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.785977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.786556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.787030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.787037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.787530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.788017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.788025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 
00:31:20.678 [2024-06-09 23:13:48.788514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.789016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.789024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.789513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.789882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.789889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.790278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.790775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.790783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.791281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.791884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.791912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.792140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.792640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.792669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.793172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.793389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.793416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.793947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.794605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.794637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 
00:31:20.678 [2024-06-09 23:13:48.795128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.795759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.795788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.796139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.796753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.796782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.797283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.797894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.797923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.798427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.798970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.798978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.799628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.800137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.800147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.800759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.801280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.801291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.801892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.802405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.802413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 
00:31:20.678 [2024-06-09 23:13:48.802902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.803315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.678 [2024-06-09 23:13:48.803325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.678 qpair failed and we were unable to recover it. 00:31:20.678 [2024-06-09 23:13:48.803908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.804318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.804328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-09 23:13:48.804846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.805367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.805379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-09 23:13:48.805867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.806424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.806442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-09 23:13:48.806810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.807321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.807328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-09 23:13:48.807811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.808225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.808234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 00:31:20.679 [2024-06-09 23:13:48.808733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.809155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.679 [2024-06-09 23:13:48.809167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.679 qpair failed and we were unable to recover it. 
00:31:20.679 [2024-06-09 23:13:48.809756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.679 [2024-06-09 23:13:48.810143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.679 [2024-06-09 23:13:48.810154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:20.679 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt logged between 23:13:48.810 and 23:13:48.968; only the timestamps differ ...]
00:31:20.952 [2024-06-09 23:13:48.968200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.952 [2024-06-09 23:13:48.968793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.952 [2024-06-09 23:13:48.968821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:20.952 qpair failed and we were unable to recover it.
00:31:20.952 [2024-06-09 23:13:48.969302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.969799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.969807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.970330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.970914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.970943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.971623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.972157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.972170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.972696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.973226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.973236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.973900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.974609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.974637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.975152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.975713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.975742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.976255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.976813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.976842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 
00:31:20.952 [2024-06-09 23:13:48.977269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.977775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.977804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.978325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.978730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.978759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.979129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.979719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.979748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.980256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.980850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.980879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.981376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.981973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.982001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.982520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.983030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.983038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.983537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.983955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.983963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 
00:31:20.952 [2024-06-09 23:13:48.984390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.984917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.984926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.985409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.985900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.985907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.986329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.986901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.986930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.987613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.988141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.988151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.988744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.989256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.989266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.952 [2024-06-09 23:13:48.989702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.990233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.952 [2024-06-09 23:13:48.990243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.952 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.990831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.991357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.991367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 
00:31:20.953 [2024-06-09 23:13:48.991968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.992587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.992623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.993167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.993781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.993810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.994316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.994856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.994885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.995107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.995613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.995622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.996132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.996759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.996788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.997288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.997667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.997676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:48.998201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.998783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.998811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 
00:31:20.953 [2024-06-09 23:13:48.999298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.999696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:48.999705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.000222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.000817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.000846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.001337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.001988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.002017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.002623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.003202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.003212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.003802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.004308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.004319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.004672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.004818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.004833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.005322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.005814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.005822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 
00:31:20.953 [2024-06-09 23:13:49.006296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.006698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.006705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.007192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.007664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.007671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.008043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.008628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.008657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.009150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.009723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.009752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.010133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.010507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.010516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.011028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.011492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.011499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.011849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.012323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.012332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 
00:31:20.953 [2024-06-09 23:13:49.012829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.013265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.013273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.013888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.014608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.014637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.015150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.015719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.015748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.016264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.016828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.016857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.017363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.017966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.017995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.018619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.019007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.019017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 00:31:20.953 [2024-06-09 23:13:49.019330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.019852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.953 [2024-06-09 23:13:49.019860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.953 qpair failed and we were unable to recover it. 
00:31:20.954 [2024-06-09 23:13:49.020345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.020956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.020984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.021586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.022130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.022140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.022677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.023184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.023195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.023808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.024355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.024365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.024846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.025373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.025384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.025869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.026373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.026384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.026975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.027630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.027658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 
00:31:20.954 [2024-06-09 23:13:49.028173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.028672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.028701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.029222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.029694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.029724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.030214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.030834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.030863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.031383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.031986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.032016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.032551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.032903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.032912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.033442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.033878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.033886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.034393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.034811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.034819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 
00:31:20.954 [2024-06-09 23:13:49.035322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.035801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.035810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.036292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.036814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.036821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.037300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.037665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.037673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.038168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.038728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.038757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.039255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.039873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.039902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.040398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.040957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.040985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.041386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.041901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.041930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 
00:31:20.954 [2024-06-09 23:13:49.042618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.043119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.043129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.043363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.043845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.043854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.044212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.044815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.044843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.045206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.045436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.045449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.045933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.046604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.046633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.047143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.047785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.047814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.048328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.048931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.048960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 
00:31:20.954 [2024-06-09 23:13:49.049594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.050164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.954 [2024-06-09 23:13:49.050175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.954 qpair failed and we were unable to recover it. 00:31:20.954 [2024-06-09 23:13:49.050762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.051156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.051166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.051766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.052183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.052194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.052794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.053337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.053348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.053957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.054605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.054634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.055030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.055640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.055669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.056209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.056797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.056825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 
00:31:20.955 [2024-06-09 23:13:49.057340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.057848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.057857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.058338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.058917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.058947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.059634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.060164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.060174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.060862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.061415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.061427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.061826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.062351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.062358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.062864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.063303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.063311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.063811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.064287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.064295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 
00:31:20.955 [2024-06-09 23:13:49.064649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.065114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.065122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.065706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.066198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.066208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.066766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.067230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.067240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.067707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.068235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.068245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.068776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.069189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.069200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.069691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.069956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.069972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.070506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.071041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.071049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 
00:31:20.955 [2024-06-09 23:13:49.071530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.072018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.072026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.072564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.073072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.073081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.073522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.074035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.074042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.074325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.074718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.074726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.075224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.075786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.075815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.076311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.076657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.076667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.077153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.077755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.077785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 
00:31:20.955 [2024-06-09 23:13:49.078252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.078799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.078828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.079320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.079911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.079940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.955 qpair failed and we were unable to recover it. 00:31:20.955 [2024-06-09 23:13:49.080609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.955 [2024-06-09 23:13:49.081157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.081167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.081820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.082346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.082357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.082871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.083298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.083308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.083845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.084213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.084221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.084676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.085189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.085199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 
00:31:20.956 [2024-06-09 23:13:49.085720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.086230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.086240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.086832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.087386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.087397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.087956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.088612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.088641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.089044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.089638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.089667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.090188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.090856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.090885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.091578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.092000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.092011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.092520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.093003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.093011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 
00:31:20.956 [2024-06-09 23:13:49.093512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.093891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.093898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.094408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.094862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.094870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.095350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.095895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.095904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.096387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.096953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.096982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.097611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.098116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.098126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.098667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.099178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.099188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.099803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.100350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.100361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 
00:31:20.956 [2024-06-09 23:13:49.100868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.101396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.101412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.101913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.102624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.102652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.103133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.103704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.103733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.104251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.104805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.104833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.105332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.105735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.105764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.106153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.106643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.106671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.956 qpair failed and we were unable to recover it. 00:31:20.956 [2024-06-09 23:13:49.107157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.107699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.956 [2024-06-09 23:13:49.107728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 
00:31:20.957 [2024-06-09 23:13:49.108203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.108715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.108747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.109255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.109863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.109892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.110311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.110937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.110966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.111318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.111907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.111916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.112148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.112636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.112665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.113216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.113784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.113812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.114289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.114708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.114716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 
00:31:20.957 [2024-06-09 23:13:49.115231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.115820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.115849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.116329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.116980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.117008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.117628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.118155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.118165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.118744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.119257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.957 [2024-06-09 23:13:49.119272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:20.957 qpair failed and we were unable to recover it. 00:31:20.957 [2024-06-09 23:13:49.119859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.120388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.120400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.224 qpair failed and we were unable to recover it. 00:31:21.224 [2024-06-09 23:13:49.121023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.121648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.121677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.224 qpair failed and we were unable to recover it. 00:31:21.224 [2024-06-09 23:13:49.122213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.122706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.122735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.224 qpair failed and we were unable to recover it. 
00:31:21.224 [2024-06-09 23:13:49.123256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.123875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.123904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.224 qpair failed and we were unable to recover it. 00:31:21.224 [2024-06-09 23:13:49.124289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.124855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.124885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.224 qpair failed and we were unable to recover it. 00:31:21.224 [2024-06-09 23:13:49.125379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.224 [2024-06-09 23:13:49.125927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.125957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.126623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.127048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.127060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.127662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.128188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.128198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.128702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.129250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.129261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.129976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.130658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.130691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 
00:31:21.225 [2024-06-09 23:13:49.131045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.131675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.131704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.132173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.132723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.132756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.133274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.133866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.133895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.134427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.134931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.134940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.135596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.136147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.136158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.136694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.137234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.137244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.137718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.138107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.138118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 
00:31:21.225 [2024-06-09 23:13:49.138341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.138621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.138635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.139131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.139493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.139500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.139909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.140394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.140416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.140899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.141426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.141442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.141921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.142433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.142441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.142946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.143433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.143441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.143940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.144417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.144426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 
00:31:21.225 [2024-06-09 23:13:49.144904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.145422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.145430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.145698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.146167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.146175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.146618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.146951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.146958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.147436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.147849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.147856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.148358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.148775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.148782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.149298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.149762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.149770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.150260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.150757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.150786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 
00:31:21.225 [2024-06-09 23:13:49.151272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.151887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.151916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.152410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.152807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.152836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.153355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.153904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.153933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.225 qpair failed and we were unable to recover it. 00:31:21.225 [2024-06-09 23:13:49.154627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.155061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.225 [2024-06-09 23:13:49.155071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.155653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.156181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.156192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.156810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.157328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.157339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.157923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.158312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.158320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 
00:31:21.226 [2024-06-09 23:13:49.158930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.159630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.159659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.160164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.160792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.160821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.161334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.161866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.161895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.162628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.163179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.163189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.163644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.164036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.164047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.164684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.165184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.165194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.165670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.166171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.166181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 
00:31:21.226 [2024-06-09 23:13:49.166736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.167147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.167157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.167708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.168197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.168208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.168686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.169055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.169066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.169580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.170093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.170101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.170688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.171155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.171165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.171749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.172255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.172266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.172685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.173065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.173075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 
00:31:21.226 [2024-06-09 23:13:49.173680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.174180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.174191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.174727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.175279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.175289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.175794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.176318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.176326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.176792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.177410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.177418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.177938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.178622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.178651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.179119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.179603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.179632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.180095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.180693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.180722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 
00:31:21.226 [2024-06-09 23:13:49.181233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.181793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.181823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.182297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.182816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.182825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.183335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.183747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.183776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.184266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.184466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.184479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.226 qpair failed and we were unable to recover it. 00:31:21.226 [2024-06-09 23:13:49.184965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.226 [2024-06-09 23:13:49.185617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.185646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.186113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.186426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.186443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.186921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.187387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.187395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 
00:31:21.227 [2024-06-09 23:13:49.187883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.188359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.188368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.188868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.189254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.189264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.189798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.190345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.190356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.190943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.191615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.191644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.192194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.192629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.192658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.193177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.193798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.193827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.194352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.194932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.194962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 
00:31:21.227 [2024-06-09 23:13:49.195596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.196095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.196105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.196712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.197222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.197232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.197722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.198233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.198244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.198872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.199229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.199240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.199828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.200338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.200348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.200984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.201637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.201666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.202179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.202788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.202816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 
00:31:21.227 [2024-06-09 23:13:49.203332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.203914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.203943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.204583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.205128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.205138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.205790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.206337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.206348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.206853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.207399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.207415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.207906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.208612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.208642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.209163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.209374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.209383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.209944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.210605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.210634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 
00:31:21.227 [2024-06-09 23:13:49.211147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.211640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.211669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.211896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.212241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.212250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.212758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.213285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.213293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.213787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.214266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.214274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.227 qpair failed and we were unable to recover it. 00:31:21.227 [2024-06-09 23:13:49.214648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.215194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.227 [2024-06-09 23:13:49.215205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.215815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.216331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.216341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.216915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.217577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.217606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 
00:31:21.228 [2024-06-09 23:13:49.218114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.218645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.218674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.219187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.219810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.219840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.220392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.220957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.220985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.221352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.221943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.221972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.222176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.222805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.222834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.223201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.223780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.223809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.224283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.224887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.224916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 
00:31:21.228 [2024-06-09 23:13:49.225559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.226106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.226117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.226705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.227252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.227263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.227863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.228410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.228422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.229045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.229638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.229667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.230176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.230779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.230807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.231322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.231874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.231903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.232290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.232789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.232797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 
00:31:21.228 [2024-06-09 23:13:49.233302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.233897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.233926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.234562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.235116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.235126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.235714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.236266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.236276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.236873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.237424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.237443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.237928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.238438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.238446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.238916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.239330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.239338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.239825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.240340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.240348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 
00:31:21.228 [2024-06-09 23:13:49.240862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.241384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.241391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.241976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.242578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.242608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.243121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.243703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.243731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.244230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.244810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.244839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.245318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.245868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.245897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.228 [2024-06-09 23:13:49.246372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.246989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.228 [2024-06-09 23:13:49.247019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.228 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.247385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.247819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.247848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 
00:31:21.229 [2024-06-09 23:13:49.248344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.248913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.248942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.249561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.250093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.250103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.250598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.251143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.251154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.251758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.252306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.252316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.252830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.253309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.253317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.253786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.254306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.254314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.254891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.255561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.255590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 
00:31:21.229 [2024-06-09 23:13:49.256099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.256718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.256747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.256979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.257482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.257492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.257872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.258383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.258391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.258909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.259323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.259330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.259839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.260321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.260329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.260799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.261317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.261325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.261942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.262560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.262589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 
00:31:21.229 [2024-06-09 23:13:49.263098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.263339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.263353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.263876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.264398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.264409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.265030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.265621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.265650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.266160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.266646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.266675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.267185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.267803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.267832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.268306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.268888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.268917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.269426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.269948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.269956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 
00:31:21.229 [2024-06-09 23:13:49.270473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.270970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.270977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.229 qpair failed and we were unable to recover it. 00:31:21.229 [2024-06-09 23:13:49.271484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.229 [2024-06-09 23:13:49.271968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.271976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.272489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.273005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.273013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.273501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.274001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.274008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.274474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.274989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.274996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.275467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.275966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.275973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.276479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.276950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.276958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 
00:31:21.230 [2024-06-09 23:13:49.277443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.277960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.277971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.278477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.278997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.279005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.279520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.279934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.279941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.280169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.280661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.280670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.281157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.281672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.281679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.282157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.282667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.282676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.283181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.283789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.283818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 
00:31:21.230 [2024-06-09 23:13:49.284333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.284905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.284934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.285564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.286078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.286088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.286649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.287195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.287205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.287820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.288369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.288383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.288952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.289599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.289628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.290174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.290784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.290814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.291325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.291894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.291922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 
00:31:21.230 [2024-06-09 23:13:49.292282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.292890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.292918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.293578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.294121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.294131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.294718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.295224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.295234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.295471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.295967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.295975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.296203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.296695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.296703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.297211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.297783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.297812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.298311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.298864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.298897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 
00:31:21.230 [2024-06-09 23:13:49.299412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.299842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.299850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.300366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.300830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.300858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.230 qpair failed and we were unable to recover it. 00:31:21.230 [2024-06-09 23:13:49.301367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.301963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.230 [2024-06-09 23:13:49.301992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.302587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.303096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.303106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.303716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.304264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.304274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.304877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.305617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.305646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.306162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.306361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.306375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 
00:31:21.231 [2024-06-09 23:13:49.306730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.307105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.307116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.307627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.308136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.308144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.308740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.309288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.309299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.309801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.310320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.310328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.310729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.311202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.311210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.311827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.312372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.312382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.312984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.313606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.313635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 
00:31:21.231 [2024-06-09 23:13:49.314150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.314718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.314746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.315126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.315743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.315772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.316287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.316802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.316811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.317331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.317898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.317927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.318562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.319067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.319077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.319661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.320205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.320216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.320782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.321327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.321337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 
00:31:21.231 [2024-06-09 23:13:49.321832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.322360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.322367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.322858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.323398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.323414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.323875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.324579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.324607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.325119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.325738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.325767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.326291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.326812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.326821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.327080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.327679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.327708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.327942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.328451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.328461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 
00:31:21.231 [2024-06-09 23:13:49.328970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.329493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.329501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.330011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.330372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.330380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.330841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.331358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.331366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.231 qpair failed and we were unable to recover it. 00:31:21.231 [2024-06-09 23:13:49.331854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.231 [2024-06-09 23:13:49.332324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.332332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.332905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.333409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.333421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.333872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.334398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.334410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.334977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.335566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.335594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 
00:31:21.232 [2024-06-09 23:13:49.336097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.336725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.336754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.337280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.337878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.337907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.338425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.338843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.338850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.339363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.339882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.339890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.340378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.340936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.340965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.341574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.342068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.342079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 00:31:21.232 [2024-06-09 23:13:49.342690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.343185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.232 [2024-06-09 23:13:49.343196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.232 qpair failed and we were unable to recover it. 
00:31:21.504 [2024-06-09 23:13:49.495694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.504 [2024-06-09 23:13:49.496243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.504 [2024-06-09 23:13:49.496253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.504 qpair failed and we were unable to recover it. 00:31:21.504 [2024-06-09 23:13:49.496865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.504 [2024-06-09 23:13:49.497373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.497384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.497968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.498562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.498591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.499068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.499639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.499668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.500178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.500787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.500816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.501292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.501816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.501824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.502364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.502909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.502938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 
00:31:21.505 [2024-06-09 23:13:49.503581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.504128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.504138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.504720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.505220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.505230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.505840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.506358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.506368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.507051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.507650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.507679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.508194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.508629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.508657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.509136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.509753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.509782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.510293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.510799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.510807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 
00:31:21.505 [2024-06-09 23:13:49.511296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.511769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.511777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.512287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.512888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.512917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.513544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.514092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.514102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.514708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.515253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.515263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.515692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.516080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.516092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.516663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.517211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.517221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.517834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.518376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.518387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 
00:31:21.505 [2024-06-09 23:13:49.519005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.519613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.519642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.520145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.520749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.520778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.521135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.521747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.521775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.522002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.522491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.522500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.522889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.523362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.523370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.523866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.524343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.524351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.524866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.525389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.525397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 
00:31:21.505 [2024-06-09 23:13:49.525995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.526634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.526662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.527175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.527785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.527814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.505 qpair failed and we were unable to recover it. 00:31:21.505 [2024-06-09 23:13:49.528310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.528897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.505 [2024-06-09 23:13:49.528926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.529560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.530093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.530103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.530716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.531264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.531275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.531896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.532575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.532604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.533104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.533717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.533747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 
00:31:21.506 [2024-06-09 23:13:49.534252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.534710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.534739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.535250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.535858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.535887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.536393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.536997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.537026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.537607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.538115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.538126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.538726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.538986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.539003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.539494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.539869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.539876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.540362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.540597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.540610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 
00:31:21.506 [2024-06-09 23:13:49.541106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.541468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.541476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.541989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.542509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.542517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.542977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.543448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.543456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.543976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.544495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.544503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.545015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.545486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.545494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.546009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.546562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.546569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.547081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.547448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.547456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 
00:31:21.506 [2024-06-09 23:13:49.547676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.548183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.548191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.548666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.549181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.549189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.549798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.550311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.550321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.550898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.551427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.551443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.551960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.552475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.552483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.506 qpair failed and we were unable to recover it. 00:31:21.506 [2024-06-09 23:13:49.552969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.506 [2024-06-09 23:13:49.553480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.553489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.553869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.554384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.554391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 
00:31:21.507 [2024-06-09 23:13:49.554892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.555411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.555419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.555779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.556248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.556255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.556837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.557387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.557397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.557969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.558628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.558657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.559141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.559714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.559742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.560252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.560861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.560890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.561389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.561884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.561913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 
00:31:21.507 [2024-06-09 23:13:49.562578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.563123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.563133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.563708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.564205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.564215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.564825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.565321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.565331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.565944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.566566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.566595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.566976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.567504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.567513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.567994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.568358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.568366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.568876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.569355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.569363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 
00:31:21.507 [2024-06-09 23:13:49.569839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.570386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.570397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.570891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.571372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.571381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.571727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.572162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.572174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.572748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.573243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.573253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.573847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.574344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.574354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.574924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.575569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.575597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.576117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.576595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.576623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 
00:31:21.507 [2024-06-09 23:13:49.577134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.577754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.577783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.578323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.578670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.578678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.579164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.579778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.579808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.580323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.580843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.580851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.581213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.581804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.581832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.582343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.582930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.582958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.507 qpair failed and we were unable to recover it. 00:31:21.507 [2024-06-09 23:13:49.583567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.507 [2024-06-09 23:13:49.584068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.584078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 
00:31:21.508 [2024-06-09 23:13:49.584690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.585236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.585246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.585850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.586352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.586362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.586978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.587570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.587600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.588148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.588719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.588747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.589262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.589835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.589864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.590379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.590969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.590998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.591591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.592135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.592145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 
00:31:21.508 [2024-06-09 23:13:49.592724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.593229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.593239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.593874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.594139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.594156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.594652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.595196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.595207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.595790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.596292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.596303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.596805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.597212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.597220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.597811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.598184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.598194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.598759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.599271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.599282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 
00:31:21.508 [2024-06-09 23:13:49.599813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.600330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.600339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.600906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.601410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.601421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.601909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.602384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.602392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.602859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.603415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.603427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.604023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.604659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.604691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.605204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.605778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.605806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.606322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.606792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.606820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 
00:31:21.508 [2024-06-09 23:13:49.607288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.607708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.607717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.608205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.608813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.608843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.609357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.609973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.610002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.610627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.611142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.611152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.611765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.612261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.612271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.612855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.613406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.613417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 00:31:21.508 [2024-06-09 23:13:49.613983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.614593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.614622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.508 qpair failed and we were unable to recover it. 
00:31:21.508 [2024-06-09 23:13:49.615131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.615748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.508 [2024-06-09 23:13:49.615780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.616160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.616656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.616685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.616914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.617129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.617140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.617636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.618158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.618166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.618688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.619036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.619043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.619644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.620189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.620199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.620783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.621283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.621293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 
00:31:21.509 [2024-06-09 23:13:49.621795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.622310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.622318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.622690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.623162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.623170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.623773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.624322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.624332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.624819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.625338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.625350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.625837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.626384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.626394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.626977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.627408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.627420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.627973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.628610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.628639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 
00:31:21.509 [2024-06-09 23:13:49.629132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.629696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.629725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.630242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.630743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.630772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.631288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.631893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.631922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.632125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.632596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.632604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.633095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.633704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.633732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.634250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.634839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.634867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.635375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.635991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.636023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 
00:31:21.509 [2024-06-09 23:13:49.636562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.637065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.637076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.637686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.638187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.638197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.638797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.639295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.639305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.639805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.640328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.640336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.640794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.641020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.641036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.641389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.641912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.641921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.642582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.643131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.643141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 
00:31:21.509 [2024-06-09 23:13:49.643754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.644299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.644310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.509 qpair failed and we were unable to recover it. 00:31:21.509 [2024-06-09 23:13:49.644801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.645279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.509 [2024-06-09 23:13:49.645286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.645761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.646278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.646285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.646777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.647304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.647312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.647808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.648169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.648179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.648783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.649171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.649183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.649771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.650319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.650329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 
00:31:21.510 [2024-06-09 23:13:49.650870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.651387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.651395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.651998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.652588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.652616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.653128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.653693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.653722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.654267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.654861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.654890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.655399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.656023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.656052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.656650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.657196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.657207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.657822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.658084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.658100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 
00:31:21.510 [2024-06-09 23:13:49.658689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.658953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.658970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.659334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.659839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.659847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.660354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.660824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.660832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.661298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.661791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.661798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.662191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.662794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.662823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.663057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.663587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.663596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.664113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.664723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.664752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 
00:31:21.510 [2024-06-09 23:13:49.665232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.665741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.665770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.666268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.666763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.666792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.667316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.667866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.667896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.668413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.668968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.668997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.669639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.670191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.670202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.510 qpair failed and we were unable to recover it. 00:31:21.510 [2024-06-09 23:13:49.670791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.510 [2024-06-09 23:13:49.671338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.511 [2024-06-09 23:13:49.671348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.511 qpair failed and we were unable to recover it. 00:31:21.511 [2024-06-09 23:13:49.671940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.511 [2024-06-09 23:13:49.672595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.511 [2024-06-09 23:13:49.672624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.511 qpair failed and we were unable to recover it. 
00:31:21.511 [2024-06-09 23:13:49.673138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.673747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.673777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.674281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.674863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.674892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.675393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.676011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.676040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.676644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.677192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.677202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.677806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.678331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.678342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.678934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.679620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.679649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.680005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.680619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.680647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 
00:31:21.778 [2024-06-09 23:13:49.681007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.681490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.681498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.681993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.682488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.682496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.682985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.683504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.683512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.683991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.684503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.684511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.685037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.685554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.685562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.778 qpair failed and we were unable to recover it. 00:31:21.778 [2024-06-09 23:13:49.686085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.778 [2024-06-09 23:13:49.686601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.686609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.687114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.687677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.687707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 
00:31:21.779 [2024-06-09 23:13:49.688202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.688763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.688792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.689304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.689794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.689803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.690315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.690777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.690784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.691291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.691672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.691681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.692177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.692745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.692774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.693138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.693595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.693624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.694136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.694711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.694740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 
00:31:21.779 [2024-06-09 23:13:49.695100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.695574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.695582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.696078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.696690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.696719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.697115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.697642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.697650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.698165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.698723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.698752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.699265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.699856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.699885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.700389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.700961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.700989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.701593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.702093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.702103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 
00:31:21.779 [2024-06-09 23:13:49.702729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.703098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.703108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.703689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.704189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.704200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.704785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.705178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.705188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.705414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.705943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.705951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.706563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.707110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.707122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.707721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.708266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.708276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.708856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.709353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.709363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 
00:31:21.779 [2024-06-09 23:13:49.709941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.710564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.710593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.710992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.711611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.711640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.712119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.712726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.712755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.713251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.713861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.713890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.714408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.714971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.715000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.715605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.716151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.716161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.779 qpair failed and we were unable to recover it. 00:31:21.779 [2024-06-09 23:13:49.716815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.779 [2024-06-09 23:13:49.717367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.717377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 
00:31:21.780 [2024-06-09 23:13:49.717961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.718570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.718599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.719111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.719725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.719754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.720298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.720767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.720775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.721286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.721763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.721791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.722155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.722748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.722777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.723294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.723773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.723781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.724297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.724772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.724780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 
00:31:21.780 [2024-06-09 23:13:49.725287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.725801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.725809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.726276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.726875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.726904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.727381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.727940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.727969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.728563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.729110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.729120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.729728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.730251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.730262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.730851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.731398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.731418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.731972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.732610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.732639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 
00:31:21.780 [2024-06-09 23:13:49.733115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.733599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.733609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.734133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.734635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.734643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.735139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.735707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.735736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.736260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.736859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.736887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.737364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.737979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.738008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.738604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.739071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.739081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.739669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.740169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.740179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 
00:31:21.780 [2024-06-09 23:13:49.740813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.741311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.741322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.741816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.742302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.742309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.742899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.743578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.743607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.743972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.744482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.744491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.744980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.745458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.745466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.745980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.746454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.746462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 00:31:21.780 [2024-06-09 23:13:49.746697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.747050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.747058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.780 qpair failed and we were unable to recover it. 
00:31:21.780 [2024-06-09 23:13:49.747555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.748078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.780 [2024-06-09 23:13:49.748085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.748603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.749121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.749129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.749638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.750156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.750163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.750660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.751199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.751210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.751794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.752337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.752348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.753031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.753671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.753700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.754214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.754794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.754824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 
00:31:21.781 [2024-06-09 23:13:49.755299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.755857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.755886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.756383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.756881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.756910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.757566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.758107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.758117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.758372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.758859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.758867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.759046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.759528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.759536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.759763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.760254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.760262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.760835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.761306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.761315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 
00:31:21.781 [2024-06-09 23:13:49.761796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.762269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.762277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.762778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.763347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.763358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.763931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.764548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.764577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.765097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.765720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.765749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.766213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.766784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.766813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.767327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.767941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.767970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.768385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.768963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.768992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 
00:31:21.781 [2024-06-09 23:13:49.769594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.770140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.770150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.770730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.771196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.771207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.771826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.772332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.772341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.772914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.773563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.773592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.774102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.774681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.774713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.775226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.775841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.775869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.776387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.777002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.777031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 
00:31:21.781 [2024-06-09 23:13:49.777627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.778125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.778135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.781 qpair failed and we were unable to recover it. 00:31:21.781 [2024-06-09 23:13:49.778733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.781 [2024-06-09 23:13:49.779234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.779244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.779858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.780367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.780378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.780953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.781564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.781593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.782094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.782701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.782729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.783202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.783679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.783690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.784215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.784691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.784699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 
00:31:21.782 [2024-06-09 23:13:49.785220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.785681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.785713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.786209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.786407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.786421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.786923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.787197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.787205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.787784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.788327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.788338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.788910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.789411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.789422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.789920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.790561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.790590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.791146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.791719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.791748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 
00:31:21.782 [2024-06-09 23:13:49.792258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.792860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.792889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.793396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.794033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.794062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.794654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.795153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.795163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.795761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.796130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.796143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.796733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.797279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.797290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.797807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.798281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.798288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.798797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.799319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.799327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 
00:31:21.782 [2024-06-09 23:13:49.799892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.800562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.800591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.801104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.801719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.801748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.802216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.802755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.802784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.803289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.803820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.803829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.804337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.804900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.804929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.805565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.806111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.806121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.806689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.807189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.807200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 
00:31:21.782 [2024-06-09 23:13:49.807795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.808340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.808351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.782 qpair failed and we were unable to recover it. 00:31:21.782 [2024-06-09 23:13:49.808923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.809562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.782 [2024-06-09 23:13:49.809591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.810107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.810711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.810740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.811202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.811777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.811806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.812302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.812795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.812803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.813318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.813881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.813910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.814139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.814494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.814503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 
00:31:21.783 [2024-06-09 23:13:49.814988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.815509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.815518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.816077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.816552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.816560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.817066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.817582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.817590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.818105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.818696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.818725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.819219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.819834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.819864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.820365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.820935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.820963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.821575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.822119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.822129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 
00:31:21.783 [2024-06-09 23:13:49.822738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.823284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.823294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.823777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.824246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.824254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.824837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.825384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.825394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.825851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.826355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.826366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.826966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.827598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.827626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.828144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.828758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.828787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.829298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.829772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.829781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 
00:31:21.783 [2024-06-09 23:13:49.830054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.830627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.830656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.831029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.831257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.831270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.831761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.831994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.832005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.832376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.832885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.832893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.833416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.833784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.783 [2024-06-09 23:13:49.833794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.783 qpair failed and we were unable to recover it. 00:31:21.783 [2024-06-09 23:13:49.834003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.834220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.834232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.834701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.835023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.835030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 
00:31:21.784 [2024-06-09 23:13:49.835527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.836000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.836008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.836526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.837049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.837057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.837567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.838082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.838089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.838595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.839063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.839072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.839561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.839930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.839938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.840445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.840920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.840928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.841429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.841942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.841949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 
00:31:21.784 [2024-06-09 23:13:49.842418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.842899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.842907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.843378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.843891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.843899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.844408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.844906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.844914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.845610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.846114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.846124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.846724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.847268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.847278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.847801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.848324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.848331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.848826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.849340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.849348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 
00:31:21.784 [2024-06-09 23:13:49.849914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.850412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.850423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.850872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.851367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.851377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.851952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.852594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.852623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.853138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.853754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.853783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.854294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.854662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.854671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.854889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.855328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.855336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.855823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.856295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.856302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 
00:31:21.784 [2024-06-09 23:13:49.856867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.857336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.857343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.857819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.858082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.858090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.858677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.859223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.859235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.859820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.860411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.860422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.860951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.861571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.861600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.862115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.862734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.862763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.784 qpair failed and we were unable to recover it. 00:31:21.784 [2024-06-09 23:13:49.863273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.784 [2024-06-09 23:13:49.863875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.785 [2024-06-09 23:13:49.863904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:21.785 qpair failed and we were unable to recover it. 
00:31:21.785 [2024-06-09 23:13:49.864400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:21.785 [2024-06-09 23:13:49.865002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:21.785 [2024-06-09 23:13:49.865031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 
00:31:21.785 qpair failed and we were unable to recover it. 
00:31:21.785 [the same three-line failure repeats for every subsequent connection attempt from 2024-06-09 23:13:49.865410 through 23:13:50.028015: posix.c:1032:posix_sock_create logs "connect() failed, errno = 111", nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock logs "sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420", and each attempt ends with "qpair failed and we were unable to recover it."; the elapsed-time prefix advances from 00:31:21.785 to 00:31:22.058 over these retries] 
00:31:22.058 [2024-06-09 23:13:50.028509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.028974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.028982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.029495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.030007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.030017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.030507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.031012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.031020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.031498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.032011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.032018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.032523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.032987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.032995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.033492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.034100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.034109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.034492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.035010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.035018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 
00:31:22.058 [2024-06-09 23:13:50.035524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.036040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.036048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.036565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.037038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.037046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.037553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.038030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.038038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.038529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.039054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.039061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.039534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.040048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.040056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.040560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.041034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.041041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.041529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.042002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.042010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 
00:31:22.058 [2024-06-09 23:13:50.042501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.042973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.042981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.043333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.043810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.043818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.044327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.044766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.044774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.045262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.045861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.045890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.046609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.047121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.047131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.047735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.048284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.048295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.048809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.049290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.049298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 
00:31:22.058 [2024-06-09 23:13:50.049661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.050180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.050188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.050807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.051305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.051315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.051800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.052315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.052323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.058 qpair failed and we were unable to recover it. 00:31:22.058 [2024-06-09 23:13:50.052873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.053240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.058 [2024-06-09 23:13:50.053252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.053869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.054424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.054443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.054867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.055390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.055398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.055917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.056605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.056634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 
00:31:22.059 [2024-06-09 23:13:50.056992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.057472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.057481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.057987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.058500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.058508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.059000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.059478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.059486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.059963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.060521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.060529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.061030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.061386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.061394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.061874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.062393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.062404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.062740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.063285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.063297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 
00:31:22.059 [2024-06-09 23:13:50.063797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.064314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.064322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.064776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.065291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.065298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.065809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.066281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.066289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.066797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.067313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.067320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.067893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.068408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.068419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.068888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.069210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.069219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.069811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.070366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.070377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 
00:31:22.059 [2024-06-09 23:13:50.070906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.071332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.071343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.071919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.072310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.072322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.072901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.073563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.073593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.074091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.074667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.074696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.075191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.075798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.075827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.076372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.076916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.076945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.077605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.078117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.078127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 
00:31:22.059 [2024-06-09 23:13:50.078716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.079216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.079227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.079800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.080344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.080355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.080942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.081599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.081628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.082141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.082750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.082779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.059 qpair failed and we were unable to recover it. 00:31:22.059 [2024-06-09 23:13:50.083141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.083739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.059 [2024-06-09 23:13:50.083768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.084270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.084853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.084882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.085415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.085870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.085899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 
00:31:22.060 [2024-06-09 23:13:50.086412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.086993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.087022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.087645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.088193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.088204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.088799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.089298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.089309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.089895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.090605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.090634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.091144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.091718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.091747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.092243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.092878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.092907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.093606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.094154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.094164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 
00:31:22.060 [2024-06-09 23:13:50.094771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.095281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.095292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.095769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.096291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.096298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.096806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.097323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.097331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.097909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.098424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.098443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.098942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.099411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.099420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.099913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.100376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.100384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.100951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.101593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.101622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 
00:31:22.060 [2024-06-09 23:13:50.101988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.102222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.102234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.102766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.103281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.103293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.103781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.104302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.104310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.104823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.108105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.108124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.108651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.109191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.109201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.109444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.109710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.109724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 00:31:22.060 [2024-06-09 23:13:50.110064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.110578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.110587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.060 qpair failed and we were unable to recover it. 
00:31:22.060 [2024-06-09 23:13:50.111109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.060 [2024-06-09 23:13:50.111638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.111647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.112150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.112680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.112688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.113197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.113711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.113718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.114234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.114844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.114873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.115370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.115970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.116002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.116306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.116725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.116734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.117267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.117882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.117911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 
00:31:22.061 [2024-06-09 23:13:50.118425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.118923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.118932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.119564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.119990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.120000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.120537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.121057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.121066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.121428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.121938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.121945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.122456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.122977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.122985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.123564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.124075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.124083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.124572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.125092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.125100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 
00:31:22.061 [2024-06-09 23:13:50.125608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.126084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.126096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.126701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.127131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.127143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.127609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.128128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.128136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.128742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.129291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.129302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.129800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.130325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.130333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.130572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.131103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.131111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 00:31:22.061 [2024-06-09 23:13:50.131726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.132260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.061 [2024-06-09 23:13:50.132271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.061 qpair failed and we were unable to recover it. 
00:31:22.061 [2024-06-09 23:13:50.132864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.061 [2024-06-09 23:13:50.133365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.061 [2024-06-09 23:13:50.133375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:22.061 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats for every reconnect attempt from 2024-06-09 23:13:50.133 through 23:13:50.293: connect() fails with errno = 111 (ECONNREFUSED) in posix.c:1032:posix_sock_create, nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:31:22.335 [2024-06-09 23:13:50.294069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.294479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.294491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.294732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.295232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.295240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.295639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.296124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.296132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.296622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.297138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.297146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.297637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.298119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.298126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.298651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.299173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.299181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.299664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.300040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.300051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 
00:31:22.335 [2024-06-09 23:13:50.300535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.301048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.301055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.301548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.302048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.302056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.302562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.302978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.302986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.303505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.304024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.304032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.304546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.305026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.305033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.305388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.305903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.305911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.306291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.306576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.306584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 
00:31:22.335 [2024-06-09 23:13:50.307072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.307595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.307602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.308128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.308743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.308771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.309164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.309783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.309812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.310047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.310665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.310694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.311263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.311906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.311935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.312622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.313165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.313175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.313661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.314180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.314192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 
00:31:22.335 [2024-06-09 23:13:50.314811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.315195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.315206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.315814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.316357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.316368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.335 [2024-06-09 23:13:50.316718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.317233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.335 [2024-06-09 23:13:50.317244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.335 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.317839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.318340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.318350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.318937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.319594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.319623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.320116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.320697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.320727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.321238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.321660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.321689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 
00:31:22.336 [2024-06-09 23:13:50.321936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.322450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.322460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.322979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.323457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.323465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.323984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.324455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.324463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.324988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.325507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.325515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.326006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.326484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.326491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.326985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.327230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.327239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.327719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.328238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.328246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 
00:31:22.336 [2024-06-09 23:13:50.328824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.329204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.329215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.329807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.330354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.330364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.330957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.331616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.331646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.332162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.332784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.332813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.333336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.333843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.333872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.334363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.334897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.334926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.335284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.335800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.335809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 
00:31:22.336 [2024-06-09 23:13:50.336323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.336824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.336832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.337318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.337868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.337897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.338265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.338856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.338884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.339411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.339996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.340025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.340642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.341112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.341123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.341761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.342278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.342289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.342810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.343107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.343115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 
00:31:22.336 [2024-06-09 23:13:50.343716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.344226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.344237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.344885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.345387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.345398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.345976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.346365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.346376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.346976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.347600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.336 [2024-06-09 23:13:50.347629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.336 qpair failed and we were unable to recover it. 00:31:22.336 [2024-06-09 23:13:50.348118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.348837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.348854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.349360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.349931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.349960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.350642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.351189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.351199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 
00:31:22.337 [2024-06-09 23:13:50.351790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.352333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.352344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.352921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.353621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.353650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.354139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.354342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.354355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.354861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.355097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.355109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.355699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.356229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.356239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.356852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.357331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.357342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.358020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.358676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.358705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 
00:31:22.337 [2024-06-09 23:13:50.359220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.359738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.359766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.360168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.360779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.360808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.361322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.361846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.361875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.362391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.362980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.363009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.363400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.363969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.363998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.364603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.365160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.365170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.365742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.366295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.366305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 
00:31:22.337 [2024-06-09 23:13:50.366629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.367146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.367156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.367832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.368259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.368269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.368749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.369260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.369270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.369765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.370346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.370357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.370930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.371624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.371652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.372175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.372744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.372773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.337 qpair failed and we were unable to recover it. 00:31:22.337 [2024-06-09 23:13:50.373277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.337 [2024-06-09 23:13:50.373759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.373788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 
00:31:22.338 [2024-06-09 23:13:50.374267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.374901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.374931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.375411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.375966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.375995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.376634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.377165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.377176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.377728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.378276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.378286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.378873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.379385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.379395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.380000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.380654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.380683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.381101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.381711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.381740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 
00:31:22.338 [2024-06-09 23:13:50.382218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.382817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.382846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.383356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.383934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.383963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.384603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.385108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.385118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.385682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.386230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.386240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.386836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.387333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.387345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.387939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.388600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.388629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.388877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.389360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.389369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 
00:31:22.338 [2024-06-09 23:13:50.389937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.390424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.390440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.390960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.391372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.391379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.391753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.392311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.392319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.392865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.393152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.393160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.393772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.394154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.394165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.394391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.394881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.394890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.395400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.395975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.396004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 
00:31:22.338 [2024-06-09 23:13:50.396604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.396986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.396997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.397597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.398108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.398119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.398708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.399265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.399275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.399681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.400089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.400101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.400620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.401167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.401177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.401676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.402185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.402196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.338 [2024-06-09 23:13:50.402772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.403278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.403288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 
00:31:22.338 [2024-06-09 23:13:50.403789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.404323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.338 [2024-06-09 23:13:50.404331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.338 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.404872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.405351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.405360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 110417 Killed "${NVMF_APP[@]}" "$@" 00:31:22.339 [2024-06-09 23:13:50.405958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 23:13:50 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:22.339 [2024-06-09 23:13:50.406606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.406634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 23:13:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.406884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 23:13:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:22.339 23:13:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:22.339 [2024-06-09 23:13:50.407373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.407382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:31:22.339 [2024-06-09 23:13:50.407743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.408220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.408228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.408825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.409378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.409388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 
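The errno in the repeated posix_sock_create() messages, 111, is ECONNREFUSED: the initiator keeps retrying an NVMe/TCP queue-pair connect to 10.0.0.2:4420, and each connect() is actively refused, which is consistent with the nvmf_tgt process that owned that listener (PID 110417) having just been killed at target_disconnect.sh line 44. A minimal sketch of the same condition, assuming only bash and coreutils on a Linux host (this is illustrative, not SPDK code):

#!/usr/bin/env bash
# Minimal sketch, not SPDK code: show what the repeated "errno = 111" means.
# connect(2) to a port with no listener is answered with a TCP reset, which
# the kernel reports as ECONNREFUSED (errno 111). Address and port mirror the
# log; any reachable host with nothing listening on that port behaves the same.

addr=10.0.0.2   # target IP from the log (assumed reachable from this shell)
port=4420       # NVMe/TCP port the initiator keeps retrying

# bash's /dev/tcp/<host>/<port> pseudo-path performs a real connect() when
# opened; with no listener the open fails with "Connection refused".
if ! timeout 2 bash -c "exec 3<>/dev/tcp/${addr}/${port}"; then
    echo "connect to ${addr}:${port} failed (ECONNREFUSED when nothing is listening)"
fi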
00:31:22.339 [2024-06-09 23:13:50.409997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.410598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.410627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.411016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.411605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.411634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.412125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.412700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.412729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.413223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.413827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.413856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 23:13:50 -- nvmf/common.sh@469 -- # nvmfpid=111320 00:31:22.339 [2024-06-09 23:13:50.414382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 23:13:50 -- nvmf/common.sh@470 -- # waitforlisten 111320 00:31:22.339 23:13:50 -- common/autotest_common.sh@819 -- # '[' -z 111320 ']' 00:31:22.339 23:13:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:22.339 [2024-06-09 23:13:50.414877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.414906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 23:13:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.339 23:13:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:22.339 [2024-06-09 23:13:50.415411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 23:13:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:22.339 23:13:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:22.339 [2024-06-09 23:13:50.415904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.415913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 23:13:50 -- common/autotest_common.sh@10 -- # set +x 00:31:22.339 [2024-06-09 23:13:50.416409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.417018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.417048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.417695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.418099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.418110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.418745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.419123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.419134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.419743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.420252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.420263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.420850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.421424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.421443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 00:31:22.339 [2024-06-09 23:13:50.421942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.422314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.339 [2024-06-09 23:13:50.422324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.339 qpair failed and we were unable to recover it. 
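In the trace above, waitforlisten 111320 blocks until the freshly started nvmf_tgt answers on its RPC socket at /var/tmp/spdk.sock (rpc_addr, with max_retries=100). The sketch below shows that general pattern only; it is not the exact helper from autotest_common.sh, and the rpc.py path and polling interval are assumptions:

    # Sketch: wait until pid 111320 is serving RPCs on /var/tmp/spdk.sock (values from the trace above).
    pid=111320
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
        # rpc_get_methods only succeeds once the app has brought its RPC server up.
        if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
            echo "process $pid is listening on $rpc_addr"
            break
        fi
        sleep 0.5
    done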
00:31:22.339 [2024-06-09 23:13:50.422803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.339 [2024-06-09 23:13:50.423277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.339 [2024-06-09 23:13:50.423285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:22.339 qpair failed and we were unable to recover it.
00:31:22.340 [... the same failure sequence repeats for each connection attempt from 23:13:50.423 through 23:13:50.458 ...]
00:31:22.340 [... connection attempts keep failing in the same way (connect() failed, errno = 111; qpair failed and we were unable to recover it.) from 23:13:50.459 through 23:13:50.461 ...]
00:31:22.341 [2024-06-09 23:13:50.462036] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:31:22.341 [2024-06-09 23:13:50.462081] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:22.341 [... further connection attempts fail the same way from 23:13:50.462 through 23:13:50.465 ...]
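The bracketed DPDK EAL parameter list is derived from the nvmf_tgt command line traced earlier: -m 0xF0 shows up as the EAL core mask -c 0xF0, and --file-prefix=spdk0 is consistent with -i 0. A simple cross-check of what the running target was actually started with, assuming the pid 111320 from the trace:

    # Sketch: print the command line of the target that produced the EAL banner above.
    tr '\0' ' ' < /proc/111320/cmdline; echo
    # Expected to end with: nvmf_tgt -i 0 -e 0xFFFF -m 0xF0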
00:31:22.341 [2024-06-09 23:13:50.465708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.341 [2024-06-09 23:13:50.466228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.341 [2024-06-09 23:13:50.466239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:22.341 qpair failed and we were unable to recover it.
00:31:22.342 [... the same failure sequence repeats for each connection attempt from 23:13:50.466 through 23:13:50.492 ...]
00:31:22.342 EAL: No free 2048 kB hugepages reported on node 1
00:31:22.342 [... connection attempts to 10.0.0.2, port=4420 keep failing (connect() failed, errno = 111; qpair failed and we were unable to recover it.) from 23:13:50.493 through 23:13:50.499 ...]
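The EAL notice means NUMA node 1 had no free 2048 kB hugepages when the target initialized; startup can still proceed as long as enough hugepage memory is available elsewhere. Per-node hugepage accounting can be inspected through standard sysfs paths, for example:

    # Sketch: show total and free 2048 kB hugepages per NUMA node.
    for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        node=$(basename "$(dirname "$(dirname "$n")")")
        echo "$node: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
    done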
00:31:22.342 [2024-06-09 23:13:50.500326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.342 [2024-06-09 23:13:50.500795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.342 [2024-06-09 23:13:50.500804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:22.342 qpair failed and we were unable to recover it.
00:31:22.611 [... the same failure sequence repeats for each connection attempt from 23:13:50.501 through 23:13:50.541 ...]
00:31:22.612 [2024-06-09 23:13:50.542603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.542979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:22.612 [2024-06-09 23:13:50.543163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.543172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.543807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.544321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.544332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.544929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.545648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.545677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.546192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.546833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.546862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.547165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.547618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.547647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.548204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.548800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.548829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.549334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.549939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.549968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
00:31:22.612 [2024-06-09 23:13:50.550597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.551101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.551112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.551732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.552239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.552249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.552849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.552998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.553009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.553480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.554010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.554018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.554270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.554798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.554806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.555345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.555845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.555873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.556185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.556742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.556771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
00:31:22.612 [2024-06-09 23:13:50.557297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.557620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.557630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.558129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.558386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.558394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.558918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.559568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.559598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.560129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.560736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.560765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.561141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.561769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.561799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.562148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.562758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.562787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.563196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.563771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.563802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 
00:31:22.612 [2024-06-09 23:13:50.564285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.564810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.564818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.565334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.565893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.565922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.566614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.567165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.567176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.567425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.567949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.567957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.568616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.568881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.612 [2024-06-09 23:13:50.568896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.612 qpair failed and we were unable to recover it. 00:31:22.612 [2024-06-09 23:13:50.569219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.569727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.569736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.570238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.570667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.570696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 
00:31:22.613 [2024-06-09 23:13:50.571222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.571810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.571839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.572354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.572902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.572932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.573615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.574167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.574177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.574796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.575302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.575312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.575924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.576640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.576669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.576903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.577272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.577281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.577761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.578293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.578301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 
00:31:22.613 [2024-06-09 23:13:50.578817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.579349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.579356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.579953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.580620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.580649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.581156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.581777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.581807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.582325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.582900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.582929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.583612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.584004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.584014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.584506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.584986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.584994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.585514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.586023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.586031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 
00:31:22.613 [2024-06-09 23:13:50.586495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.587008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.587015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.587510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.588023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.588031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.588275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.588768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.588775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.589173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.589783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.589812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.590289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.590786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.590794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.591285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.591872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.591901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.592600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.593122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.593132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 
00:31:22.613 [2024-06-09 23:13:50.593424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.594042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.594051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.594261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.594714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.594744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.595264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.595466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.595479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.595973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.596599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.596629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.597054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.597641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.597670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.598140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.598645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.613 [2024-06-09 23:13:50.598674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.613 qpair failed and we were unable to recover it. 00:31:22.613 [2024-06-09 23:13:50.599179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.599758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.599786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 
00:31:22.614 [2024-06-09 23:13:50.600302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.600749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.600758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.601120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.601692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.601721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.601968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.602456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.602465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.602961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.603492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.603501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.603987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.604467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.604477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.605001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.605520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.605528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.605991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.605990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:22.614 [2024-06-09 23:13:50.606108] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.614 [2024-06-09 23:13:50.606118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.614 [2024-06-09 23:13:50.606127] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:22.614 [2024-06-09 23:13:50.606170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:22.614 [2024-06-09 23:13:50.606327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:22.614 [2024-06-09 23:13:50.606513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.606520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.606446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:22.614 [2024-06-09 23:13:50.606650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:22.614 [2024-06-09 23:13:50.607153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.607670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.607678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.608191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.608674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.608703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.609104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.609684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.609714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.610258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.610835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.610864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.611371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.611849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.611878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.612611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.613166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.613176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 
00:31:22.614 [2024-06-09 23:13:50.613667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.613894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.613909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.614452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.614866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.614874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.615379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.615900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.615909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.616431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.616889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.616897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.617416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.617904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.617911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.618423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.618902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.618910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.619396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.619671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.619679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 
00:31:22.614 [2024-06-09 23:13:50.620157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.620682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.620690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.621173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.621792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.621821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.622343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.622720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.622729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.623092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.623620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.623649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.624134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.624603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.624632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.625152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.625780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.614 [2024-06-09 23:13:50.625811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.614 qpair failed and we were unable to recover it. 00:31:22.614 [2024-06-09 23:13:50.626331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.626842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.626851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 
00:31:22.615 [2024-06-09 23:13:50.627207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.627782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.627811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.628213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.628795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.628825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.629348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.629770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.629800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.630319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.630587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.630596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.631089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.631702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.631731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.632101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.632379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.632386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.632933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.633273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.633280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 
00:31:22.615 [2024-06-09 23:13:50.633856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.634124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.634139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.634634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.635013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.635021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.635382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.635820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.635829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.636340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.636820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.636828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.637349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.637835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.637843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.638335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.638837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.638866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.639142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.639727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.639757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 
00:31:22.615 [2024-06-09 23:13:50.640259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.640878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.640907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.641387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.642009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.642038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.642649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.643205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.643215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.643712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.643951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.643961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.644466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.644839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.644847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.645194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.645709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.645717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.645948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.646171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.646184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 
00:31:22.615 [2024-06-09 23:13:50.646665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.647023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.647032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.647271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.647803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.647812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.615 qpair failed and we were unable to recover it. 00:31:22.615 [2024-06-09 23:13:50.648057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.615 [2024-06-09 23:13:50.648585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.648593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.649095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.649610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.649618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.649979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.650366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.650373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.650893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.651369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.651376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.651721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.652248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.652258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-09 23:13:50.652856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.653369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.653379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.653984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.654260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.654270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.654696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.655198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.655210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.655412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.656045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.656073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.656580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.657125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.657135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.657750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.658304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.658314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.658913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.659611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.659640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-09 23:13:50.660161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.660744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.660773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.661278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.661842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.661872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.662390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.662985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.663014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.663631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.664150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.664161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.664378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.664973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.665002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.665643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.666147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.666158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.666797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.667311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.667322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-09 23:13:50.667904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.668626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.668659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.669192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.669734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.669763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.670264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.670651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.670679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.671204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.671811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.671840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.672352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.672952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.672981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.673251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.673833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.673862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.674133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.674724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.674754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 
00:31:22.616 [2024-06-09 23:13:50.675245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.675866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.675896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.676614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.677166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.677178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.677794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.678302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.678312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.616 qpair failed and we were unable to recover it. 00:31:22.616 [2024-06-09 23:13:50.678899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.616 [2024-06-09 23:13:50.679411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.679426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.679805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.680191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.680199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.680669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.681073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.681084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.681320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.681661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.681670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-09 23:13:50.682134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.682639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.682667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.683152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.683741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.683770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.684249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.684865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.684893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.685416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.685555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.685562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.685934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.686297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.686304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.686557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.686802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.686809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.687322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.687821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.687832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-09 23:13:50.688072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.688562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.688570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.689064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.689274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.689281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.689776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.690187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.690195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.690791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.691186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.691198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.691775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.692166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.692177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.692766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.693269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.693280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.693803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.694341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.694349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-09 23:13:50.694853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.695294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.695302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.695896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.696410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.696422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.696795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.697269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.697279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.697879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.698244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.698254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.698657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.699017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.699027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.699599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.700109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.700119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.700356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.700771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.700780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 
00:31:22.617 [2024-06-09 23:13:50.701275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.701845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.701874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.702353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.702820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.702849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.703078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.703574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.703584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.703947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.704469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.704478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.617 qpair failed and we were unable to recover it. 00:31:22.617 [2024-06-09 23:13:50.704975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.705498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.617 [2024-06-09 23:13:50.705506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.706007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.706485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.706493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.706987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.707481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.707489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 
00:31:22.618 [2024-06-09 23:13:50.707993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.708473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.708482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.708607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.708838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.708846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.709183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.709551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.709560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.710073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.710462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.710470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.711009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.711488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.711496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.711718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.711928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.711936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.712446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.712837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.712845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 
00:31:22.618 [2024-06-09 23:13:50.713119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.713636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.713643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.714139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.714657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.714665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.715020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.715541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.715550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.715902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.716375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.716382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.716622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.717112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.717120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.717481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.717961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.717969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.718485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.718997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.719004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 
00:31:22.618 [2024-06-09 23:13:50.719244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.719599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.719607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.720081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.720609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.720617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.721111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.721725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.721755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.722025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.722299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.722307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.722825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.723269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.723276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.723860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.724363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.724374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.724755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.725309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.725320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 
00:31:22.618 [2024-06-09 23:13:50.725432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.725901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.725909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.726278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.726658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.726686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.727214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.727847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.727876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.728619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.729128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.729138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.729753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.730295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.730306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.618 qpair failed and we were unable to recover it. 00:31:22.618 [2024-06-09 23:13:50.730794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.618 [2024-06-09 23:13:50.731161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.731170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.731782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.732171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.732181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 
00:31:22.619 [2024-06-09 23:13:50.732726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.733230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.733240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.733852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.734155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.734165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.734747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.735263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.735274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.735881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.736382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.736392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.736745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.737072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.737084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.737696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.737972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.737983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.738508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.739022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.739030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 
00:31:22.619 [2024-06-09 23:13:50.739507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.739982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.739990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.740344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.740755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.740763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.741277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.741883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.741912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.742193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.742801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.742829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.743352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.743822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.743852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.744210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.744864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.744893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.745412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.745724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.745753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 
00:31:22.619 [2024-06-09 23:13:50.745992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.746242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.746249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.746737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.747220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.747228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.747749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.748300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.748311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.748826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.749353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.749361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.749950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.750254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.750266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.750850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.751400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.751415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.751766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.751925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.751935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 
00:31:22.619 [2024-06-09 23:13:50.752431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.752924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.752931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.753454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.753935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.753943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.619 qpair failed and we were unable to recover it. 00:31:22.619 [2024-06-09 23:13:50.754460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.619 [2024-06-09 23:13:50.754944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.754951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.755447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.755926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.755934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.756410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.756916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.756923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.757276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.757795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.757803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.758313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.758554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.758568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 
00:31:22.620 [2024-06-09 23:13:50.759071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.759433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.759442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.759687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.759886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.759893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.760365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.760785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.760793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.761181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.761648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.761657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.762148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.762671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.762679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.763154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.763731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.763760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.764277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.764895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.764925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 
00:31:22.620 [2024-06-09 23:13:50.765411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.766012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.766041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.766647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.767024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.767034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.767648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.768155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.768165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.768647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.769154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.769165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.769280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.769767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.769776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.770180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.770760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.770788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.771317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.771592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.771601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 
00:31:22.620 [2024-06-09 23:13:50.772087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.772702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.772731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.773211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.773724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.773753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.774022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.774511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.774519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.775039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.775527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.775535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.775946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.776194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.776201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.776671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.777150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.777158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.777746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.778146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.778157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 
00:31:22.620 [2024-06-09 23:13:50.778544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.778983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.778991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.779503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.780012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.780020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.780364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.780845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.780853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.620 qpair failed and we were unable to recover it. 00:31:22.620 [2024-06-09 23:13:50.781343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.620 [2024-06-09 23:13:50.781824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.621 [2024-06-09 23:13:50.781833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.621 qpair failed and we were unable to recover it. 00:31:22.888 [2024-06-09 23:13:50.782379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.888 [2024-06-09 23:13:50.782915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.888 [2024-06-09 23:13:50.782944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.888 qpair failed and we were unable to recover it. 00:31:22.888 [2024-06-09 23:13:50.783612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.888 [2024-06-09 23:13:50.784031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.784041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.784663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.785196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.785207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 
00:31:22.889 [2024-06-09 23:13:50.785592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.786133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.786144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.786727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.787233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.787243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.787864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.788252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.788262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.788885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.789398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.789427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.789794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.790324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.790332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.790918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.791314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.791324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.791907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.792619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.792649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 
00:31:22.889 [2024-06-09 23:13:50.793130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.793695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.793725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.794287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.794522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.794530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.795024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.795514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.795522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.796048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.796575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.796582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.796951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.797468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.797476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.797965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.798489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.798497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.799017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.799258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.799272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 
00:31:22.889 [2024-06-09 23:13:50.799808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.800328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.800336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.800861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.801288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.801296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.801897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.802410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.802422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.802922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.803608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.803638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.804157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.804783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.804812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.805287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.805883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.805912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.806179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.806645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.806674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 
00:31:22.889 [2024-06-09 23:13:50.807189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.807808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.807837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.808353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.808951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.808979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.809595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.810112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.810122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.810720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.811277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.811289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.811798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.812082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.812091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.812571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.813098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.889 [2024-06-09 23:13:50.813105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.889 qpair failed and we were unable to recover it. 00:31:22.889 [2024-06-09 23:13:50.813713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.814218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.814229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 
00:31:22.890 [2024-06-09 23:13:50.814819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.815326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.815336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.815848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.816335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.816343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.816926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.817614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.817643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.818167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.818750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.818779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.819049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.819153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.819160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.819654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.819907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.819915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.820324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.820833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.820840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 
00:31:22.890 [2024-06-09 23:13:50.821350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.821868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.821879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.822112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.822367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.822377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.822872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.823395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.823408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.823910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.824396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.824408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.824871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.825298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.825308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.825896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.826198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.826209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.826796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.827346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.827356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 
00:31:22.890 [2024-06-09 23:13:50.827944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.828621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.828650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.829137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.829342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.829349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.829907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.830385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.830393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.831029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.831643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.831675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.832032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.832244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.832257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.832718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.833273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.833283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.833645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.834135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.834143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 
00:31:22.890 [2024-06-09 23:13:50.834745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.835299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.835311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.835823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.836324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.836332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.836596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.837116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.837124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.837393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.837900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.837909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.838215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.838759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.838788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.839114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.839673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.839702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 00:31:22.890 [2024-06-09 23:13:50.840076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.840690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.890 [2024-06-09 23:13:50.840722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.890 qpair failed and we were unable to recover it. 
00:31:22.890 [2024-06-09 23:13:50.841218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.841598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.841627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.842192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.842772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.842801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.843268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.843802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.843832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.844346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.844935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.844964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.845316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.845777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.845806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.846322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.846810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.846819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.847336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.847907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.847936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 
00:31:22.891 [2024-06-09 23:13:50.848175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.848751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.848780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.849288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.849572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.849581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.850063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.850544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.850557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.851074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.851596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.851604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.852115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.852640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.852669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.852909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.853426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.853435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.853954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.854324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.854331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 
00:31:22.891 [2024-06-09 23:13:50.854833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.855311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.855319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.855808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.856330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.856338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.856696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.857168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.857176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.857307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.857777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.857786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.858297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.858545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.858552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.859066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.859467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.859475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.859970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.860493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.860501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 
00:31:22.891 [2024-06-09 23:13:50.860974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.861499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.861507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.862027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.862423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.862431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.862663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.863147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.863156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.863660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.864136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.864145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.864656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.865183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.865192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.865691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.866241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.866251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.866864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.867090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.867105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 
00:31:22.891 [2024-06-09 23:13:50.867675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.868177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.891 [2024-06-09 23:13:50.868187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.891 qpair failed and we were unable to recover it. 00:31:22.891 [2024-06-09 23:13:50.868675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.869064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.869075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.869601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.870089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.870097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.870707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.871098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.871108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.871610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.872140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.872148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.872721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.873274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.873284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.873561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.874091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.874098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 
00:31:22.892 [2024-06-09 23:13:50.874373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.874884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.874892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.875167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.875752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.875781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.876309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.876804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.876812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.877317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.877811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.877820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.878307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.878849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.878877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.879380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.879986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.880016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.880642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.881195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.881205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 
00:31:22.892 [2024-06-09 23:13:50.881828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.882230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.882240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.882828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.883381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.883392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.883999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.884231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.884246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.884711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.885104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.885115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.885725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.885999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.886009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.886532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.886926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.886933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 00:31:22.892 [2024-06-09 23:13:50.887429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.887920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.892 [2024-06-09 23:13:50.887928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.892 qpair failed and we were unable to recover it. 
00:31:22.892 [2024-06-09 23:13:50.888446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.892 [2024-06-09 23:13:50.888974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:22.892 [2024-06-09 23:13:50.888981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420
00:31:22.892 qpair failed and we were unable to recover it.
[the same three-message sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt logged from 23:13:50.889 through 23:13:51.030]
00:31:22.898 [2024-06-09 23:13:51.031141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.031617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.031625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.032019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.032503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.032511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.033040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.033567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.033574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.034099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.034370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.034378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.034733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.035255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.035263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.035693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.035967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.035979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.036467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.037008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.037015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 
00:31:22.898 [2024-06-09 23:13:51.037537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.038048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.038055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.038286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.038765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.038774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.039010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.039510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.039519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.040000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.040519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.040526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.041055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.041398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.041418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.041915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.042616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.042644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.043151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.043604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.043633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 
00:31:22.898 [2024-06-09 23:13:51.043902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.044424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.044432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.044849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.045376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.045383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.045518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.045988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.045996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.046538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.047025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.047033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.047565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.047928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.047935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.898 [2024-06-09 23:13:51.048442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.048721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.898 [2024-06-09 23:13:51.048729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.898 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.049194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.049675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.049682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 
00:31:22.899 [2024-06-09 23:13:51.050175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.050561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.050570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.050924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.051397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.051409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.051877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.052400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.052411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.053034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.053305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.053316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.053835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.054136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.054146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.054366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.054860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.054869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.055391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.055918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.055926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 
00:31:22.899 [2024-06-09 23:13:51.056421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.056669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.056676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.056948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.057471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.057479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.057970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.058331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.058340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.058837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.059308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.059316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.059674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.060174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.899 [2024-06-09 23:13:51.060182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:22.899 qpair failed and we were unable to recover it. 00:31:22.899 [2024-06-09 23:13:51.060675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.060959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.060969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.061472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.061993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.062001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-09 23:13:51.062519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.062991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.062998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.063356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.063628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.063636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.063871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.064398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.064411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.064906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.065385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.065391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.065888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.066371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.066378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.066959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.067572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.067599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.067875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.068077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.068084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-09 23:13:51.068580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.068798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.068810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.069378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.069795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.069802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.070160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.070501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.070509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.070834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.071304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.071311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.071803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.072271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.072278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.072587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.072945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.072952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.073175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.073658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.073665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-09 23:13:51.074014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.074411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.074418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.074919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.075394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.075400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.075886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.076607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.076635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.077195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.077696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.077724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.078213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.078813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.078842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.079246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.079739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.079766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.080258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.080839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.080868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 
00:31:23.168 [2024-06-09 23:13:51.081391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.081986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.082013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.082371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.082695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.082723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.168 qpair failed and we were unable to recover it. 00:31:23.168 [2024-06-09 23:13:51.083122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.168 [2024-06-09 23:13:51.083713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.083740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.083982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.084185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.084191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.084671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.084900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.084907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.085200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.085691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.085698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.086176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.086425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.086432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-09 23:13:51.086797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.086896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.086902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.087384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.087877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.087884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.088377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.088878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.088884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.089365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.089945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.089973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.090598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.091115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.091124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.091701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.092222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.092231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.092804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.093318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.093327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-09 23:13:51.093907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.094427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.094446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.094945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.095392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.095399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.095915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.096181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.096188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.096780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.097051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.097067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.097566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.097765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.097775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.097998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.098322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.098330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.098827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.099304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.099310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-09 23:13:51.099543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.100041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.100048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.100523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.100999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.101005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.101132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.101516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.101523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.101894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.102268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.102275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.102751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.103240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.103246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.103816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.104109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.104118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.104736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.105251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.105260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 
00:31:23.169 [2024-06-09 23:13:51.105773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.106158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.106167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.106793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.107309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.107318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.107855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.108127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.108134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.169 qpair failed and we were unable to recover it. 00:31:23.169 [2024-06-09 23:13:51.108779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.169 [2024-06-09 23:13:51.109169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.109178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.109744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.110274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.110286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.110803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.111331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.111337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.111845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.112321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.112327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-09 23:13:51.112904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.113423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.113441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.113926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.114356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.114363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.114639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.115016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.115023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.115492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.116029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.116035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.116546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.117021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.117028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.117421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.117933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.117939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.118424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.118887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.118894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-09 23:13:51.119365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.119847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.119857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.120222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.120830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.120858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.121382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.122031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.122058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.122679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.123193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.123202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.123723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.124244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.124253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.124904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.125200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.125210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.125835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.126348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.126356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-09 23:13:51.126930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.127610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.127638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.128157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.128743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.128771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.129298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.129671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.129678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.129880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.130268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.130279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.130791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.131313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.131320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.131903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.132610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.132638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.133034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.133417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.133424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 
00:31:23.170 [2024-06-09 23:13:51.133925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.134417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.134424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.134897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.135381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.135388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.135657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.136049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.136055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.136293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.136792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.136798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.170 qpair failed and we were unable to recover it. 00:31:23.170 [2024-06-09 23:13:51.137274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.137751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.170 [2024-06-09 23:13:51.137759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.138249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.138616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.138644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.139220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.139790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.139821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-09 23:13:51.140085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.140328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.140334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.140523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.140924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.140930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.141408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.141913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.141919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.142395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.143006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.143034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.143655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.144170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.144179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.144755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.145271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.145281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.145775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.146223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.146230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-09 23:13:51.146799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.147320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.147329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.147907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.148620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.148647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.149174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.149756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.149783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.150273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.150852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.150880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.151408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.152012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.152039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.152411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.152756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.152783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.153285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.153935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.153962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-09 23:13:51.154196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.154799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.154826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.155350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.155947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.155975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.156596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.157115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.157124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.157700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.158089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.158098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.158591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.159100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.159109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.159677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.159946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.159956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.160245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.160743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.160750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 
00:31:23.171 [2024-06-09 23:13:51.161234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.161701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.161729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.162227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.162802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.162830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.163313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.163902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.163929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.164611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.165120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.165129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.165794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.166312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.166321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.171 [2024-06-09 23:13:51.166811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.167227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.171 [2024-06-09 23:13:51.167234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.171 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.167787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.168302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.168311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-09 23:13:51.168900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.169417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.169427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.169959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.170502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.170510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.170991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.171470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.171477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.171953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.172474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.172482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.172989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.173316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.173323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.173805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.174043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.174050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.174556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.174922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.174928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-09 23:13:51.175172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.175553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.175560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.175791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.176297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.176304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.176526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.177044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.177051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.177524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.177733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.177739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.178246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.178726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.178733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.179116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.179357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.179364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.179876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.180355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.180361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-09 23:13:51.180717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.181201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.181208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.181814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.182329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.182339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.182869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.183383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.183392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.183999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.184645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.184672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.185244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.185870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.185897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.186264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.186657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.186685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.187188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.187637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.187665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 
00:31:23.172 [2024-06-09 23:13:51.188189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.188589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.188616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.172 [2024-06-09 23:13:51.189112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.189655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.172 [2024-06-09 23:13:51.189684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.172 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.190215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.190787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.190815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.191340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.191926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.191954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.192595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.193189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.193198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.193774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.194071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.194081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.194705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.194827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.194840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-09 23:13:51.195056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.195357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.195364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.195927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.196406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.196412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.196923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.197399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.197409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.197631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.198194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.198201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.198766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.199394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.199408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.199700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.200185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.200192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.200769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.201285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.201294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-09 23:13:51.201930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.202614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.202641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.203166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.203750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.203777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.204343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.204923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.204951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.205607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.206128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.206137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.206409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.207012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.207039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.207699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.208221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.208230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.208805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.209317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.209327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-09 23:13:51.209808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.210328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.210338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.210842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.211242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.211250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.211861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.212625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.212653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.213171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.213602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.213629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.213899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.214267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.214273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.214848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.215278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.215287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.215787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.216181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.216188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 
00:31:23.173 [2024-06-09 23:13:51.216751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.217326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.217336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.217585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.218059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.218065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.218622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.219006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.173 [2024-06-09 23:13:51.219014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.173 qpair failed and we were unable to recover it. 00:31:23.173 [2024-06-09 23:13:51.219244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.219462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.219475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.219747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.220224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.220231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.220795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.221214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.221223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.221726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.222248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.222257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-09 23:13:51.222836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.223353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.223362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.223859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.224387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.224396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.225013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.225649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.225677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.226206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.226798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.226825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.227185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.227584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.227612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.228170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.228715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.228743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.229243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.229869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.229896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-09 23:13:51.230169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.230628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.230656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.231193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.231824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.231852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.232377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.232962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.232990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.233234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 23:13:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:23.174 23:13:51 -- common/autotest_common.sh@852 -- # return 0 00:31:23.174 [2024-06-09 23:13:51.233814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.233842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 23:13:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:23.174 23:13:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:23.174 [2024-06-09 23:13:51.234325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.174 [2024-06-09 23:13:51.234894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.234921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.235626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.236180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.236189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
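Buried in the chunk above are the harness's own xtrace lines: common/autotest_common.sh@848 evaluates (( i == 0 )), @852 hits return 0, and nvmf/common.sh@471 then runs timing_exit start_nvmf_tgt before xtrace is disabled. That looks like a counter-guarded wait/retry helper finishing and the script closing the start_nvmf_tgt timing region, even while the initiator keeps logging reconnect failures. As a rough, hypothetical sketch of that loop shape only, not the actual helper in autotest_common.sh (the probe, names and retry budget below are all illustrative):

  # Hypothetical sketch of a counter-guarded wait loop; the real logic in
  # SPDK's autotest_common.sh may differ in names, counts and what it checks.
  check_ready() {
      # Placeholder readiness probe; substitute a real condition here.
      [[ -S /tmp/example_app.sock ]]
  }

  wait_until_ready() {
      local i=10                    # retry budget, illustrative value
      while ! check_ready; do
          i=$((i - 1))
          (( i == 0 )) && return 1  # budget exhausted: the give-up branch
          sleep 1
      done
      return 0                      # success path
  }

In the trace above, the (( i == 0 )) test at @848 is followed directly by return 0 at @852, i.e. the helper returned 0 (success) right after checking its counter.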
00:31:23.174 [2024-06-09 23:13:51.236618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.237007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.237017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.237513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.237897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.237903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.238381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.238620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.238638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.239133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.239617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.239624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.239889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.240389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.240396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.240887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.241365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.241371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.241841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.242305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.242314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 
00:31:23.174 [2024-06-09 23:13:51.242707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.243198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.243204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.243665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.244047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.244057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.244643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.244876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.244888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.245187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.245564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.245570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.174 qpair failed and we were unable to recover it. 00:31:23.174 [2024-06-09 23:13:51.246096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.174 [2024-06-09 23:13:51.246585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.246591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.247095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.247707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.247738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.248236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.248830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.248857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-09 23:13:51.249130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.249413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.249422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.249897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.250389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.250396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.250922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.251414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.251421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.251901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.252387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.252394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.253011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.253525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.253534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.254029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.254378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.254385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.254987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.255598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.255626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-09 23:13:51.255959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.256333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.256339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.256881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.257311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.257321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.257543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.258065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.258072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.258613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.259156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.259165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.259781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.260079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.260088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.260613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.261099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.261105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.261732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.262039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.262048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-09 23:13:51.262544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.263043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.263050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.263528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.264013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.264020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.264504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.265038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.265045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.265447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.265922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.265928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.266485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.266948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.266955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.267175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.267522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.267530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.267756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.268273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.268280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 
00:31:23.175 [2024-06-09 23:13:51.268772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.269048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.269055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.269587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.269957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.269965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 23:13:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.175 [2024-06-09 23:13:51.270462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.270741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.270750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 23:13:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:23.175 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.175 [2024-06-09 23:13:51.271247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.175 [2024-06-09 23:13:51.271761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.271769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.175 qpair failed and we were unable to recover it. 00:31:23.175 [2024-06-09 23:13:51.272287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.272788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.175 [2024-06-09 23:13:51.272796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.273269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.273616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.273645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-09 23:13:51.273899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.274384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.274391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.274646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.275119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.275126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.275639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.276161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.276169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.276751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.276986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.276996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.277498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.277989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.277997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.278522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.279012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.279019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.279527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.280056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.280063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-09 23:13:51.280457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.280992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.281001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.281132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.281499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.281509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.282030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.282450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.282458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.282826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.283351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.283359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.283832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.284311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.284319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.284832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.285356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.285364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.285870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.286111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.286126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-09 23:13:51.286598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.287150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.287160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.287784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.288111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.288121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.288621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.288833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.288840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 Malloc0 00:31:23.176 [2024-06-09 23:13:51.289104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.289579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.289587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.176 23:13:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:23.176 [2024-06-09 23:13:51.290107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.176 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.176 [2024-06-09 23:13:51.290635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.290644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.291061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.291542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.291550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.292091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.292377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.292388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 
00:31:23.176 [2024-06-09 23:13:51.292860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.293099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.293106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.293684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.293987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.293997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.294361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.294732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.294740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.295100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.295469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.295477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.295980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.296388] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.176 [2024-06-09 23:13:51.296506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.296513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.176 qpair failed and we were unable to recover it. 00:31:23.176 [2024-06-09 23:13:51.297009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.297376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.176 [2024-06-09 23:13:51.297383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.297882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.298361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.298369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 [2024-06-09 23:13:51.298958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.299604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.299632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.299908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.300422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.300431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.300962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.301464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.301472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.301996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.302516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.302525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.303011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.303534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.303542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.304057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.304578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.304586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.305074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.177 [2024-06-09 23:13:51.305601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.305610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 23:13:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.177 [2024-06-09 23:13:51.306091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.177 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.177 [2024-06-09 23:13:51.306720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.306748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.307299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.307510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.307518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.307893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.308139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.308148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.308648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.309175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.309183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.309697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.309893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.309903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.310406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.310764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.310772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.311246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.311702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.311731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 [2024-06-09 23:13:51.311960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.312454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.312464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.313002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.313534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.313542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.314027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.314503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.314511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.314996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.315475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.315484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.315963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.316443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.316451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.316940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.317221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.317228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.177 [2024-06-09 23:13:51.317742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 23:13:51 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.177 [2024-06-09 23:13:51.318000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.318008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 
00:31:23.177 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.177 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.177 [2024-06-09 23:13:51.318491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.318996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.319004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.177 qpair failed and we were unable to recover it. 00:31:23.177 [2024-06-09 23:13:51.319279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.177 [2024-06-09 23:13:51.319796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.319803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.319983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.320358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.320366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.320711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.321231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.321239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.321358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.321875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.321884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.322408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.323011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.323040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.323280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.323658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.323687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-09 23:13:51.324191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.324743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.324771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.325251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.325754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.325783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.326333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.326853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.326883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.327411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.327893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.327922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.328666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.329203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.329214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.178 23:13:51 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.178 [2024-06-09 23:13:51.329810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.178 [2024-06-09 23:13:51.330324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.178 [2024-06-09 23:13:51.330335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-09 23:13:51.330923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.331204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.331215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.331839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.332393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.332410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.332740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.333042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.333052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.333535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.333891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.333898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.334408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.334884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.334892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.335415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.335916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.335924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 00:31:23.178 [2024-06-09 23:13:51.336190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.336473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.178 [2024-06-09 23:13:51.336481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd284000b90 with addr=10.0.0.2, port=4420 00:31:23.178 qpair failed and we were unable to recover it. 
00:31:23.178 [2024-06-09 23:13:51.336695] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.441 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.441 23:13:51 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.441 23:13:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:23.441 23:13:51 -- common/autotest_common.sh@10 -- # set +x 00:31:23.441 [2024-06-09 23:13:51.347370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.441 [2024-06-09 23:13:51.347549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.441 [2024-06-09 23:13:51.347565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.441 [2024-06-09 23:13:51.347572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.441 [2024-06-09 23:13:51.347577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.441 [2024-06-09 23:13:51.347593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.441 qpair failed and we were unable to recover it. 00:31:23.441 23:13:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:23.441 23:13:51 -- host/target_disconnect.sh@58 -- # wait 110545 00:31:23.441 [2024-06-09 23:13:51.357260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.357357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.357371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.357377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.357382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.357395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 
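For readability, the target-side setup that the interleaved xtrace above performs can be collected into the following sequence. This is a reconstruction from the trace, not additional test output; rpc_cmd is assumed to be the test framework's helper that forwards these calls to SPDK's rpc.py.

# Consolidated target setup as traced in host/target_disconnect.sh above:
rpc_cmd bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB malloc bdev with 512-byte blocks, named Malloc0
rpc_cmd nvmf_create_transport -t tcp -o                                             # initialize the TCP transport ("TCP Transport Init" notice above); -o copied as traced
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # create the subsystem, allow any host, set the serial number
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace of cnode1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # TCP listener on 10.0.0.2:4420 ("Target Listening" notice above)
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420            # discovery listener on the same address and port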
00:31:23.442 [2024-06-09 23:13:51.367273] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.367374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.367389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.367395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.367400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.367417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.377234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.377338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.377352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.377363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.377367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.377380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.387332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.387444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.387459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.387465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.387470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.387483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 
00:31:23.442 [2024-06-09 23:13:51.397252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.397346] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.397360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.397366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.397370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.397383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.407231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.407368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.407382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.407387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.407392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.407410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.417303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.417396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.417414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.417420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.417425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.417437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 
00:31:23.442 [2024-06-09 23:13:51.427631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.427745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.427758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.427764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.427768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.427780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.437278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.437370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.437385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.437390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.437395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.437414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.447419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.447511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.447525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.447531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.447536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.447548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 
00:31:23.442 [2024-06-09 23:13:51.457471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.457568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.457581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.457588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.457592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.457604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.467548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.467655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.467668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.467678] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.467682] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.467694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 00:31:23.442 [2024-06-09 23:13:51.477505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.477595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.477609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.442 [2024-06-09 23:13:51.477614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.442 [2024-06-09 23:13:51.477619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.442 [2024-06-09 23:13:51.477631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.442 qpair failed and we were unable to recover it. 
00:31:23.442 [2024-06-09 23:13:51.487545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.442 [2024-06-09 23:13:51.487640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.442 [2024-06-09 23:13:51.487654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.487660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.487664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.487677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.497614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.497710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.497723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.497729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.497734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.497746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.507660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.507761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.507774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.507780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.507784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.507796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 
00:31:23.443 [2024-06-09 23:13:51.517681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.517778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.517792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.517798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.517802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.517814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.527667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.527760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.527774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.527780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.527784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.527798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.537698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.537793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.537807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.537813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.537818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.537830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 
00:31:23.443 [2024-06-09 23:13:51.547887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.548090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.548103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.548109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.548113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.548125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.557833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.557966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.557983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.557988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.557993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.558004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.567763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.567866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.567885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.567892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.567897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.567914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 
00:31:23.443 [2024-06-09 23:13:51.577786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.577879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.577894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.577900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.577905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.577917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.587916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.588019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.588034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.588040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.588044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.588057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.597833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.597932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.597952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.597959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.597963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.597982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 
00:31:23.443 [2024-06-09 23:13:51.607862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.607959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.607974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.607979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.443 [2024-06-09 23:13:51.607984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.443 [2024-06-09 23:13:51.607997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.443 qpair failed and we were unable to recover it. 00:31:23.443 [2024-06-09 23:13:51.618022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.443 [2024-06-09 23:13:51.618171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.443 [2024-06-09 23:13:51.618192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.443 [2024-06-09 23:13:51.618198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.444 [2024-06-09 23:13:51.618203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.444 [2024-06-09 23:13:51.618218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.444 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.628073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.628180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.628200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.628207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.628212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.628228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 
00:31:23.706 [2024-06-09 23:13:51.638025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.638122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.638141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.638148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.638153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.638168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.648054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.648150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.648174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.648181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.648186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.648202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.658014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.658111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.658130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.658137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.658141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.658158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 
00:31:23.706 [2024-06-09 23:13:51.668111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.668219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.668233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.668239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.668244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.668257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.678075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.678175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.678195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.678202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.678207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.678223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.688062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.688167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.688183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.688189] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.688194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.688215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 
00:31:23.706 [2024-06-09 23:13:51.698085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.698179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.698194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.698199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.698204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.698217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.708176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.708285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.708299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.708305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.708310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.708323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.718197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.718294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.718308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.718313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.718318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.718330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 
00:31:23.706 [2024-06-09 23:13:51.728154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.728292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.728306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.728311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.728316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.728328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.738240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.738339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.738368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.738374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.738378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.738391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 00:31:23.706 [2024-06-09 23:13:51.748210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.748312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.706 [2024-06-09 23:13:51.748325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.706 [2024-06-09 23:13:51.748331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.706 [2024-06-09 23:13:51.748336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.706 [2024-06-09 23:13:51.748349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.706 qpair failed and we were unable to recover it. 
00:31:23.706 [2024-06-09 23:13:51.758312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.706 [2024-06-09 23:13:51.758407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.758421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.758427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.758432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.758444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.768309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.768412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.768426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.768432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.768437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.768449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.778311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.778411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.778424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.778430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.778438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.778450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 
00:31:23.707 [2024-06-09 23:13:51.788470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.788578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.788592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.788599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.788604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.788616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.798409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.798498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.798512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.798517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.798522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.798534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.808366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.808460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.808474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.808480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.808485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.808497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 
00:31:23.707 [2024-06-09 23:13:51.818632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.818726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.818739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.818745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.818749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.818761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.828540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.828655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.828669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.828674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.828680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.828692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.838522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.838618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.838632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.838637] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.838642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.838655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 
00:31:23.707 [2024-06-09 23:13:51.848550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.848642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.848656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.848661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.848666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.848678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.858747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.858842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.858856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.858861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.858866] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.858878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.707 [2024-06-09 23:13:51.868672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.868771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.868784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.868790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.868797] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.868809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 
00:31:23.707 [2024-06-09 23:13:51.878668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.707 [2024-06-09 23:13:51.878761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.707 [2024-06-09 23:13:51.878774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.707 [2024-06-09 23:13:51.878780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.707 [2024-06-09 23:13:51.878784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.707 [2024-06-09 23:13:51.878796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.707 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.888641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.888732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.888748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.888754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.888759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.888772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.898715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.898809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.898822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.898828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.898833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.898844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 
00:31:23.969 [2024-06-09 23:13:51.908797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.908902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.908916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.908921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.908926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.908937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.918765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.918854] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.918867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.918873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.918878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.918890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.928789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.928891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.928905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.928911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.928915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.928927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 
00:31:23.969 [2024-06-09 23:13:51.938689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.938804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.938819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.938824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.938829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.938840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.948781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.948883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.948897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.948903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.948909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.948920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.958809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.958898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.958913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.958923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.958927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.958941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 
00:31:23.969 [2024-06-09 23:13:51.968890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.969030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.969044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.969050] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.969054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.969067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.978933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.979027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.979042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.979047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.979052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.979064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:51.988927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.989064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.989084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.989091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.989096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.989112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 
00:31:23.969 [2024-06-09 23:13:51.999002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:51.999104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:51.999125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:51.999131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:51.999137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:51.999155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:52.009033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:52.009134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:52.009155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:52.009161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.969 [2024-06-09 23:13:52.009166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.969 [2024-06-09 23:13:52.009182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.969 qpair failed and we were unable to recover it. 00:31:23.969 [2024-06-09 23:13:52.019036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.969 [2024-06-09 23:13:52.019139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.969 [2024-06-09 23:13:52.019159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.969 [2024-06-09 23:13:52.019166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.019171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.019187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 
00:31:23.970 [2024-06-09 23:13:52.029200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.029308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.029329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.029335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.029340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.029356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.039116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.039214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.039229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.039235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.039240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.039253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.049041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.049133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.049152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.049158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.049164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.049176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 
00:31:23.970 [2024-06-09 23:13:52.059176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.059276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.059291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.059297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.059301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.059314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.069256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.069356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.069371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.069376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.069381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.069393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.079235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.079329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.079343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.079349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.079354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.079366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 
00:31:23.970 [2024-06-09 23:13:52.089267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.089397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.089418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.089424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.089428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.089440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.099259] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.099354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.099368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.099374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.099379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.099391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.109319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.109420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.109434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.109440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.109444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.109457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 
00:31:23.970 [2024-06-09 23:13:52.119383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.119478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.119492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.119497] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.119502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.119514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.129355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.129449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.129463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.129469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.129474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.129486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 00:31:23.970 [2024-06-09 23:13:52.139404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.970 [2024-06-09 23:13:52.139494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.970 [2024-06-09 23:13:52.139512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.970 [2024-06-09 23:13:52.139518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.970 [2024-06-09 23:13:52.139522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:23.970 [2024-06-09 23:13:52.139534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.970 qpair failed and we were unable to recover it. 
00:31:24.232 [2024-06-09 23:13:52.149425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.149555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.149568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.149574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.149579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.149592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.159502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.159593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.159606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.159612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.159617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.159629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.169445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.169559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.169572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.169578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.169582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.169594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 
00:31:24.232 [2024-06-09 23:13:52.179528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.179619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.179632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.179638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.179643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.179658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.189530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.189633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.189647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.189653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.189657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.189670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.199555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.199649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.199662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.199668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.199672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.199685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 
00:31:24.232 [2024-06-09 23:13:52.209616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.209708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.209722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.209728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.209733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.209744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.219660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.219778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.219791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.219797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.219801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.219813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.229646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.229748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.229764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.229770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.229774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.229786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 
00:31:24.232 [2024-06-09 23:13:52.239698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.239789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.239803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.239809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.239813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.239825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.249608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.249708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.249722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.249727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.249732] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.249744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.259808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.259928] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.259941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.259947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.259951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.259963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 
00:31:24.232 [2024-06-09 23:13:52.269780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.269872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.269885] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.269891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.269898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.269910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.279794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.279892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.279912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.232 [2024-06-09 23:13:52.279918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.232 [2024-06-09 23:13:52.279922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.232 [2024-06-09 23:13:52.279938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.232 qpair failed and we were unable to recover it. 00:31:24.232 [2024-06-09 23:13:52.289831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.232 [2024-06-09 23:13:52.289927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.232 [2024-06-09 23:13:52.289942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.289947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.289952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.289964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 
00:31:24.233 [2024-06-09 23:13:52.299861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.299957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.299977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.299983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.299988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.300003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.309949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.310091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.310106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.310111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.310116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.310128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.319907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.320012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.320032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.320038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.320043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.320058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 
00:31:24.233 [2024-06-09 23:13:52.329913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.330010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.330030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.330036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.330041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.330057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.339970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.340085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.340105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.340111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.340117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.340133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.349988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.350097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.350117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.350123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.350128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.350144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 
00:31:24.233 [2024-06-09 23:13:52.359955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.360062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.360082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.360089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.360099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.360116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.370051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.370147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.370168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.370174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.370181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.370197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.380105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.380202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.380221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.380228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.380233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.380248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 
00:31:24.233 [2024-06-09 23:13:52.390168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.390266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.390281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.390286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.390291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.390304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.233 [2024-06-09 23:13:52.400146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.233 [2024-06-09 23:13:52.400279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.233 [2024-06-09 23:13:52.400293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.233 [2024-06-09 23:13:52.400298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.233 [2024-06-09 23:13:52.400303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.233 [2024-06-09 23:13:52.400314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.233 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.410141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.410278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.410293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.410298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.410302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.410315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 
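Every CONNECT failure above reports the same completion status pair, sct 1, sc 130. The sketch below is illustrative only and not part of the test: it assumes the log prints the values in decimal (so sc 130 corresponds to 0x82) and maps them using the command-specific status names for the Fabrics CONNECT command; the mapping is an assumption based on the NVMe over Fabrics status codes and should be checked against the specification.

    # Illustrative only: decode the (sct, sc) pair reported in the log above.
    # Assumes the log prints decimal values, so sct 1, sc 130 == SCT 0x1, SC 0x82.
    FABRICS_CONNECT_STATUS = {
        (0x1, 0x80): "Connect Incompatible Format",
        (0x1, 0x81): "Connect Controller Busy",
        (0x1, 0x82): "Connect Invalid Parameters",  # consistent with "Unknown controller ID 0x1"
        (0x1, 0x83): "Connect Restart Discovery",
        (0x1, 0x84): "Connect Invalid Host",
    }

    def describe_status(sct: int, sc: int) -> str:
        """Return a readable name for a Fabrics CONNECT completion status."""
        return FABRICS_CONNECT_STATUS.get(
            (sct, sc), f"unrecognized status sct={sct:#x} sc={sc:#x}"
        )

    print(describe_status(1, 130))  # expected: Connect Invalid Parameters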
00:31:24.495 [2024-06-09 23:13:52.420215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.420309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.420322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.420328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.420332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.420344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.430252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.430367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.430380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.430386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.430391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.430411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.440289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.440382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.440396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.440408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.440412] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.440425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 
00:31:24.495 [2024-06-09 23:13:52.450266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.450415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.450428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.450437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.450441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.450452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.460394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.460518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.460531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.460537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.460541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.460554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.470378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.470485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.470499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.470504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.470509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.470521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 
00:31:24.495 [2024-06-09 23:13:52.480361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.480457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.480471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.480477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.480481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.495 [2024-06-09 23:13:52.480493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.495 qpair failed and we were unable to recover it. 00:31:24.495 [2024-06-09 23:13:52.490398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.495 [2024-06-09 23:13:52.490495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.495 [2024-06-09 23:13:52.490509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.495 [2024-06-09 23:13:52.490514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.495 [2024-06-09 23:13:52.490518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.490530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.500464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.500562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.500576] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.500581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.500586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.500598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 
00:31:24.496 [2024-06-09 23:13:52.510488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.510588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.510601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.510607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.510612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.510624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.520501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.520594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.520608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.520613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.520617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.520629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.530539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.530656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.530670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.530675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.530679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.530691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 
00:31:24.496 [2024-06-09 23:13:52.540574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.540670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.540684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.540693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.540697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.540709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.550628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.550733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.550746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.550751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.550755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.550768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.560632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.560720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.560733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.560739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.560743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.560755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 
00:31:24.496 [2024-06-09 23:13:52.570653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.570741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.570754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.570759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.570764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.570775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.580561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.580651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.580664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.580669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.580674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.580686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.590751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.590868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.590881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.590887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.590891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.590903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 
00:31:24.496 [2024-06-09 23:13:52.600901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.600994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.601007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.601012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.601016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.601028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.610754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.610891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.610905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.610911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.610915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.610927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 00:31:24.496 [2024-06-09 23:13:52.620784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.620879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.620899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.496 [2024-06-09 23:13:52.620905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.496 [2024-06-09 23:13:52.620909] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.496 [2024-06-09 23:13:52.620925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.496 qpair failed and we were unable to recover it. 
00:31:24.496 [2024-06-09 23:13:52.630821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.496 [2024-06-09 23:13:52.630920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.496 [2024-06-09 23:13:52.630938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.497 [2024-06-09 23:13:52.630944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.497 [2024-06-09 23:13:52.630948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.497 [2024-06-09 23:13:52.630961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.497 qpair failed and we were unable to recover it. 00:31:24.497 [2024-06-09 23:13:52.640832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.497 [2024-06-09 23:13:52.640927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.497 [2024-06-09 23:13:52.640941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.497 [2024-06-09 23:13:52.640946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.497 [2024-06-09 23:13:52.640951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.497 [2024-06-09 23:13:52.640963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.497 qpair failed and we were unable to recover it. 00:31:24.497 [2024-06-09 23:13:52.650862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.497 [2024-06-09 23:13:52.650957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.497 [2024-06-09 23:13:52.650971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.497 [2024-06-09 23:13:52.650976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.497 [2024-06-09 23:13:52.650981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.497 [2024-06-09 23:13:52.650993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.497 qpair failed and we were unable to recover it. 
00:31:24.497 [2024-06-09 23:13:52.660887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.497 [2024-06-09 23:13:52.660978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.497 [2024-06-09 23:13:52.660992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.497 [2024-06-09 23:13:52.660997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.497 [2024-06-09 23:13:52.661001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.497 [2024-06-09 23:13:52.661013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.497 qpair failed and we were unable to recover it. 00:31:24.497 [2024-06-09 23:13:52.670923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.497 [2024-06-09 23:13:52.671024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.497 [2024-06-09 23:13:52.671037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.497 [2024-06-09 23:13:52.671042] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.497 [2024-06-09 23:13:52.671047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.497 [2024-06-09 23:13:52.671061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.497 qpair failed and we were unable to recover it. 00:31:24.759 [2024-06-09 23:13:52.681035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.759 [2024-06-09 23:13:52.681204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.759 [2024-06-09 23:13:52.681217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.759 [2024-06-09 23:13:52.681222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.759 [2024-06-09 23:13:52.681226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.759 [2024-06-09 23:13:52.681237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.759 qpair failed and we were unable to recover it. 
00:31:24.759 [2024-06-09 23:13:52.690955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.759 [2024-06-09 23:13:52.691078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.759 [2024-06-09 23:13:52.691092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.759 [2024-06-09 23:13:52.691097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.759 [2024-06-09 23:13:52.691101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.759 [2024-06-09 23:13:52.691112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.759 qpair failed and we were unable to recover it. 00:31:24.759 [2024-06-09 23:13:52.701018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.759 [2024-06-09 23:13:52.701113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.759 [2024-06-09 23:13:52.701126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.759 [2024-06-09 23:13:52.701131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.759 [2024-06-09 23:13:52.701135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.759 [2024-06-09 23:13:52.701147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.759 qpair failed and we were unable to recover it. 00:31:24.759 [2024-06-09 23:13:52.711089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.759 [2024-06-09 23:13:52.711244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.759 [2024-06-09 23:13:52.711263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.759 [2024-06-09 23:13:52.711270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.711275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.711291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 
00:31:24.760 [2024-06-09 23:13:52.721061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.721152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.721170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.721176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.721181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.721193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.731090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.731187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.731201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.731206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.731210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.731223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.741133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.741225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.741239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.741244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.741248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.741260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 
00:31:24.760 [2024-06-09 23:13:52.751163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.751267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.751287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.751293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.751298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.751314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.761192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.761304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.761319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.761325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.761329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.761346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.771189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.771289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.771302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.771308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.771312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.771324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 
00:31:24.760 [2024-06-09 23:13:52.781219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.781313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.781326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.781332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.781336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.781348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.791264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.791368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.791382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.791387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.791391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.791408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.801288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.801382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.801395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.801406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.801411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.801423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 
00:31:24.760 [2024-06-09 23:13:52.811285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.811377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.811391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.811396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.811404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.811417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.821355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.821451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.821464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.821470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.821474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.821486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 00:31:24.760 [2024-06-09 23:13:52.831283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.831386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.831400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.831409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.760 [2024-06-09 23:13:52.831414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.760 [2024-06-09 23:13:52.831426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.760 qpair failed and we were unable to recover it. 
00:31:24.760 [2024-06-09 23:13:52.841274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.760 [2024-06-09 23:13:52.841362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.760 [2024-06-09 23:13:52.841375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.760 [2024-06-09 23:13:52.841381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.841385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.841396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.851609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.851704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.851717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.851723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.851730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.851742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.861472] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.861563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.861577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.861582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.861586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.861599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 
00:31:24.761 [2024-06-09 23:13:52.871475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.871571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.871585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.871590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.871594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.871606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.881383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.881482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.881495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.881501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.881505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.881518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.891532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.891633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.891647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.891653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.891657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.891669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 
00:31:24.761 [2024-06-09 23:13:52.901557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.901677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.901691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.901697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.901701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.901713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.911588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.911711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.911724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.911729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.911733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.911744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:24.761 [2024-06-09 23:13:52.921630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.921716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.921729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.921734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.921738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.921750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 
00:31:24.761 [2024-06-09 23:13:52.931652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.761 [2024-06-09 23:13:52.931764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.761 [2024-06-09 23:13:52.931778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.761 [2024-06-09 23:13:52.931784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.761 [2024-06-09 23:13:52.931788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:24.761 [2024-06-09 23:13:52.931799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.761 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:52.941704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.941797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.941811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.941819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.941824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.941835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:52.951910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.952012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.952026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.952031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.952035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.952047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 
00:31:25.024 [2024-06-09 23:13:52.961765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.961866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.961886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.961892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.961897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.961912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:52.971774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.971866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.971880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.971886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.971890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.971903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:52.981679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.981774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.981789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.981794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.981798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.981812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 
00:31:25.024 [2024-06-09 23:13:52.991844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:52.991951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:52.991966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:52.991971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:52.991977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:52.991989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:53.001725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.001819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.001832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.001838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.001842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.001854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:53.011881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.011974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.011988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.011993] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.011998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.012009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 
00:31:25.024 [2024-06-09 23:13:53.021923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.022022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.022042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.022049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.022053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.022070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:53.031940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.032038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.032058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.032068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.032073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.032089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:53.041957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.042055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.042075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.042082] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.042087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.042103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 
00:31:25.024 [2024-06-09 23:13:53.051965] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.052061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.052075] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.052081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.052085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.024 [2024-06-09 23:13:53.052098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.024 qpair failed and we were unable to recover it. 00:31:25.024 [2024-06-09 23:13:53.062041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.024 [2024-06-09 23:13:53.062143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.024 [2024-06-09 23:13:53.062163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.024 [2024-06-09 23:13:53.062170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.024 [2024-06-09 23:13:53.062174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.062190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.072053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.072160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.072179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.072186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.072190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.072206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 
00:31:25.025 [2024-06-09 23:13:53.081986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.082088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.082108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.082115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.082120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.082135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.092095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.092204] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.092224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.092231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.092236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.092251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.102167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.102265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.102285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.102292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.102297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.102313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 
00:31:25.025 [2024-06-09 23:13:53.112168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.112267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.112281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.112286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.112291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.112303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.122199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.122291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.122312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.122317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.122322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.122334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.132219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.132310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.132324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.132329] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.132333] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.132346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 
00:31:25.025 [2024-06-09 23:13:53.142256] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.142349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.142363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.142369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.142373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.142386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.152288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.152381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.152395] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.152405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.152410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.152422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.162335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.162448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.162462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.162468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.162472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.162489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 
00:31:25.025 [2024-06-09 23:13:53.172309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.172443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.172456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.172462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.172467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.172478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.182254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.182353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.025 [2024-06-09 23:13:53.182367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.025 [2024-06-09 23:13:53.182373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.025 [2024-06-09 23:13:53.182378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.025 [2024-06-09 23:13:53.182390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.025 qpair failed and we were unable to recover it. 00:31:25.025 [2024-06-09 23:13:53.192423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.025 [2024-06-09 23:13:53.192535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.026 [2024-06-09 23:13:53.192547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.026 [2024-06-09 23:13:53.192552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.026 [2024-06-09 23:13:53.192557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.026 [2024-06-09 23:13:53.192567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.026 qpair failed and we were unable to recover it. 
00:31:25.288 [2024-06-09 23:13:53.202412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.202511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.202525] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.202531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.202536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.202548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 00:31:25.288 [2024-06-09 23:13:53.212354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.212457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.212474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.212480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.212485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.212497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 00:31:25.288 [2024-06-09 23:13:53.222499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.222597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.222610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.222616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.222621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.222633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 
00:31:25.288 [2024-06-09 23:13:53.232508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.232648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.232662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.232668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.232672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.232684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 00:31:25.288 [2024-06-09 23:13:53.242530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.242624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.242639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.242645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.242649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.242662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 00:31:25.288 [2024-06-09 23:13:53.252578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.252674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.252688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.252693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.252698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.252713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 
00:31:25.288 [2024-06-09 23:13:53.262617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.262712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.262726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.262732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.262736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.288 [2024-06-09 23:13:53.262748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.288 qpair failed and we were unable to recover it. 00:31:25.288 [2024-06-09 23:13:53.272526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.288 [2024-06-09 23:13:53.272623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.288 [2024-06-09 23:13:53.272637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.288 [2024-06-09 23:13:53.272642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.288 [2024-06-09 23:13:53.272647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.272659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.282668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.282760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.282773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.282779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.282783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.282795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 
00:31:25.289 [2024-06-09 23:13:53.292696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.292791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.292804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.292810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.292815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.292827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.302716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.302833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.302849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.302855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.302860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.302872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.312760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.312857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.312870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.312876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.312880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.312891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 
00:31:25.289 [2024-06-09 23:13:53.322780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.322868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.322882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.322888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.322892] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.322904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.332790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.332884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.332897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.332903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.332907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.332919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.342836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.342932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.342946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.342952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.342959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.342971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 
00:31:25.289 [2024-06-09 23:13:53.352872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.352964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.352978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.352984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.352988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.353000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.362877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.362978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.362998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.363005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.363010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.363026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.372902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.372996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.373011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.373017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.373022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.373035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 
00:31:25.289 [2024-06-09 23:13:53.382967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.383066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.383086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.383093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.383098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.383114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.392956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.393066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.393086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.393093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.393098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.393113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 00:31:25.289 [2024-06-09 23:13:53.403026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.289 [2024-06-09 23:13:53.403168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.289 [2024-06-09 23:13:53.403188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.289 [2024-06-09 23:13:53.403196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.289 [2024-06-09 23:13:53.403201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.289 [2024-06-09 23:13:53.403216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.289 qpair failed and we were unable to recover it. 
00:31:25.289 [2024-06-09 23:13:53.413019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.413114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.413135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.413142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.413146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.413163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 00:31:25.290 [2024-06-09 23:13:53.423084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.423182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.423202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.423209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.423214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.423230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 00:31:25.290 [2024-06-09 23:13:53.433058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.433147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.433162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.433168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.433176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.433189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 
00:31:25.290 [2024-06-09 23:13:53.443092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.443186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.443200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.443206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.443211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.443223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 00:31:25.290 [2024-06-09 23:13:53.453157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.453254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.453274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.453281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.453286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.453301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 00:31:25.290 [2024-06-09 23:13:53.463161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.290 [2024-06-09 23:13:53.463256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.290 [2024-06-09 23:13:53.463271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.290 [2024-06-09 23:13:53.463277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.290 [2024-06-09 23:13:53.463282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.290 [2024-06-09 23:13:53.463296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.290 qpair failed and we were unable to recover it. 
00:31:25.555 [2024-06-09 23:13:53.473179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.555 [2024-06-09 23:13:53.473265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.555 [2024-06-09 23:13:53.473279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.555 [2024-06-09 23:13:53.473285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.555 [2024-06-09 23:13:53.473290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.555 [2024-06-09 23:13:53.473302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.555 qpair failed and we were unable to recover it. 00:31:25.555 [2024-06-09 23:13:53.483227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.555 [2024-06-09 23:13:53.483317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.555 [2024-06-09 23:13:53.483332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.483338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.483342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.483354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.493158] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.493260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.493274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.493280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.493284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.493297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 
00:31:25.556 [2024-06-09 23:13:53.503166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.503269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.503283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.503289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.503295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.503307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.513275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.513372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.513386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.513392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.513397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.513415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.523305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.523399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.523418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.523427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.523431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.523444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 
00:31:25.556 [2024-06-09 23:13:53.533362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.533457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.533472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.533478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.533483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.533496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.543408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.543516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.543530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.543536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.543540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.543555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.553252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.553342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.553356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.553362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.553366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.553378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 
00:31:25.556 [2024-06-09 23:13:53.563455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.563549] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.563563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.563569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.563573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.563586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.573459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.573558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.573572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.573577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.573582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.573594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.583523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.583622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.583636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.583641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.583646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.583658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 
00:31:25.556 [2024-06-09 23:13:53.593470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.593566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.593580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.593586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.593590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.593603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.603553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.556 [2024-06-09 23:13:53.603644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.556 [2024-06-09 23:13:53.603658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.556 [2024-06-09 23:13:53.603664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.556 [2024-06-09 23:13:53.603668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.556 [2024-06-09 23:13:53.603681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.556 qpair failed and we were unable to recover it. 00:31:25.556 [2024-06-09 23:13:53.613437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.613524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.613541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.613547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.613551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.613564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 
00:31:25.557 [2024-06-09 23:13:53.623768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.623919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.623933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.623939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.623943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.623956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.633639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.633733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.633747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.633753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.633757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.633770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.643690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.643780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.643795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.643801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.643806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.643819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 
00:31:25.557 [2024-06-09 23:13:53.653686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.653771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.653785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.653791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.653796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.653808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.663706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.663840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.663854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.663859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.663864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.663875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.673724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.673811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.673825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.673831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.673836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.673848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 
00:31:25.557 [2024-06-09 23:13:53.683767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.683852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.683866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.683872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.683876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.683888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.693748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.693833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.693847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.693853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.693857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.693870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.703802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.703888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.703905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.703911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.703915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.703928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 
00:31:25.557 [2024-06-09 23:13:53.713743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.713839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.713853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.713860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.713864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.713877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.557 [2024-06-09 23:13:53.723900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.557 [2024-06-09 23:13:53.723989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.557 [2024-06-09 23:13:53.724003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.557 [2024-06-09 23:13:53.724009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.557 [2024-06-09 23:13:53.724013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.557 [2024-06-09 23:13:53.724025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.557 qpair failed and we were unable to recover it. 00:31:25.819 [2024-06-09 23:13:53.733886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.819 [2024-06-09 23:13:53.733974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.733988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.733994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.733999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.734013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 
00:31:25.820 [2024-06-09 23:13:53.743933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.744027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.744041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.744047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.744051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.744067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.753855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.753953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.753975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.753983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.753990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.754006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.764008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.764101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.764117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.764123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.764128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.764141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 
00:31:25.820 [2024-06-09 23:13:53.774042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.774129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.774144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.774150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.774155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.774168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.784026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.784119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.784139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.784146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.784151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.784166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.794027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.794142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.794166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.794172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.794177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.794192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 
00:31:25.820 [2024-06-09 23:13:53.804117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.804209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.804229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.804235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.804241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.804255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.814072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.814163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.814184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.814190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.814196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.814211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.824138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.824245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.824260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.824266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.824270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.824283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 
00:31:25.820 [2024-06-09 23:13:53.834184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.834273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.834287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.834293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.834304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.834317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.844217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.844311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.844326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.844331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.844336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.844349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 00:31:25.820 [2024-06-09 23:13:53.854180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.854265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.854279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.854285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.854290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.854302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.820 qpair failed and we were unable to recover it. 
00:31:25.820 [2024-06-09 23:13:53.864257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.820 [2024-06-09 23:13:53.864340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.820 [2024-06-09 23:13:53.864355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.820 [2024-06-09 23:13:53.864361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.820 [2024-06-09 23:13:53.864366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.820 [2024-06-09 23:13:53.864379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.874271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.874398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.874417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.874423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.874428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.874441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.884324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.884423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.884437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.884442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.884447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.884460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 
00:31:25.821 [2024-06-09 23:13:53.894289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.894377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.894391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.894398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.894408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.894421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.904362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.904450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.904465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.904472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.904476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.904488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.914358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.914453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.914467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.914473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.914478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.914491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 
00:31:25.821 [2024-06-09 23:13:53.924446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.924539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.924552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.924558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.924566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.924579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.934451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.934536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.934550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.934556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.934560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.934573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.944479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.944569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.944582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.944588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.944593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.944605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 
00:31:25.821 [2024-06-09 23:13:53.954503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.954594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.954607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.954613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.954617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.954630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.964593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.964686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.964701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.964707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.964712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.964725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.974535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.974633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.974646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.974653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.974658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.974670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 
00:31:25.821 [2024-06-09 23:13:53.984579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.984666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.984679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.984685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.984690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.984702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:25.821 [2024-06-09 23:13:53.994539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.821 [2024-06-09 23:13:53.994631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.821 [2024-06-09 23:13:53.994645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.821 [2024-06-09 23:13:53.994651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.821 [2024-06-09 23:13:53.994655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:25.821 [2024-06-09 23:13:53.994668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.821 qpair failed and we were unable to recover it. 00:31:26.083 [2024-06-09 23:13:54.004663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.004750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.004764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.004770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.083 [2024-06-09 23:13:54.004775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.083 [2024-06-09 23:13:54.004787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.083 qpair failed and we were unable to recover it. 
00:31:26.083 [2024-06-09 23:13:54.014670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.014757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.014771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.014781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.083 [2024-06-09 23:13:54.014785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.083 [2024-06-09 23:13:54.014797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.083 qpair failed and we were unable to recover it. 00:31:26.083 [2024-06-09 23:13:54.024698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.024782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.024796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.024802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.083 [2024-06-09 23:13:54.024807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.083 [2024-06-09 23:13:54.024820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.083 qpair failed and we were unable to recover it. 00:31:26.083 [2024-06-09 23:13:54.034676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.034766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.034780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.034785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.083 [2024-06-09 23:13:54.034790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.083 [2024-06-09 23:13:54.034802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.083 qpair failed and we were unable to recover it. 
00:31:26.083 [2024-06-09 23:13:54.044739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.044829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.044843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.044849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.083 [2024-06-09 23:13:54.044854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.083 [2024-06-09 23:13:54.044866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.083 qpair failed and we were unable to recover it. 00:31:26.083 [2024-06-09 23:13:54.054641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.083 [2024-06-09 23:13:54.054727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.083 [2024-06-09 23:13:54.054741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.083 [2024-06-09 23:13:54.054748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.054752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.054764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.064764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.064850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.064864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.064870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.064874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.064887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 
00:31:26.084 [2024-06-09 23:13:54.074838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.074923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.074937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.074943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.074948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.074960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.084881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.084969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.084983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.084989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.084994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.085006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.094831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.094918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.094933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.094939] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.094943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.094955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 
00:31:26.084 [2024-06-09 23:13:54.104907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.105004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.105025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.105036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.105041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.105057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.114963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.115094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.115114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.115121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.115126] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.115141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.124973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.125072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.125092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.125099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.125104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.125119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 
00:31:26.084 [2024-06-09 23:13:54.135034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.135166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.135186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.135193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.135198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.135214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.144999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.145089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.145109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.145116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.145121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.145136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.155011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.155104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.155119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.155125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.155129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.155142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 
00:31:26.084 [2024-06-09 23:13:54.165072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.165167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.165188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.165195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.165199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.165216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.175017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.175111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.175131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.175138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.175143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.175159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 00:31:26.084 [2024-06-09 23:13:54.185194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.185312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.084 [2024-06-09 23:13:54.185327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.084 [2024-06-09 23:13:54.185333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.084 [2024-06-09 23:13:54.185338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.084 [2024-06-09 23:13:54.185350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.084 qpair failed and we were unable to recover it. 
00:31:26.084 [2024-06-09 23:13:54.195207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.084 [2024-06-09 23:13:54.195335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.195353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.195359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.195363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.195376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 00:31:26.085 [2024-06-09 23:13:54.205225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.205320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.205334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.205339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.205344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.205356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 00:31:26.085 [2024-06-09 23:13:54.215201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.215311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.215325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.215331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.215335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.215347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 
00:31:26.085 [2024-06-09 23:13:54.225215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.225305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.225319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.225325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.225330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.225342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 00:31:26.085 [2024-06-09 23:13:54.235286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.235423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.235438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.235444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.235448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.235464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 00:31:26.085 [2024-06-09 23:13:54.245358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.245468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.245482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.245488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.245493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.245505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 
00:31:26.085 [2024-06-09 23:13:54.255320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.085 [2024-06-09 23:13:54.255415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.085 [2024-06-09 23:13:54.255429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.085 [2024-06-09 23:13:54.255435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.085 [2024-06-09 23:13:54.255439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.085 [2024-06-09 23:13:54.255452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.085 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.265376] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.265493] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.265507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.265513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.265518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.265531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.275392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.275534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.275548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.275554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.275559] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.275571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 
00:31:26.347 [2024-06-09 23:13:54.285390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.285529] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.285546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.285552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.285556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.285568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.295484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.295604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.295618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.295624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.295629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.295641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.305453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.305565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.305578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.305584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.305589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.305601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 
00:31:26.347 [2024-06-09 23:13:54.315473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.315564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.315578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.315583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.315588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.315600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.325523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.325613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.325627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.325632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.325637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.325653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.335561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.335649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.335663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.335669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.335674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.335686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 
00:31:26.347 [2024-06-09 23:13:54.345452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.345539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.345553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.345558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.345563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.345576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.355630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.355722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.355736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.355742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.355747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.355759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.365621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.365727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.365741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.365747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.365751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.365763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 
00:31:26.347 [2024-06-09 23:13:54.375702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.347 [2024-06-09 23:13:54.375831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.347 [2024-06-09 23:13:54.375845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.347 [2024-06-09 23:13:54.375850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.347 [2024-06-09 23:13:54.375855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.347 [2024-06-09 23:13:54.375867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.347 qpair failed and we were unable to recover it. 00:31:26.347 [2024-06-09 23:13:54.385752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.385866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.385880] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.385886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.385891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.385904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.395637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.395727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.395741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.395748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.395752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.395764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 
00:31:26.348 [2024-06-09 23:13:54.405761] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.405881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.405895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.405901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.405905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.405918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.415785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.415869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.415883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.415889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.415897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.415909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.425805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.425890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.425902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.425908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.425913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.425924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 
00:31:26.348 [2024-06-09 23:13:54.435816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.435909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.435923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.435928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.435933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.435945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.445847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.445953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.445973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.445980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.445985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.446001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.456069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.456159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.456174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.456180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.456185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.456198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 
00:31:26.348 [2024-06-09 23:13:54.465927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.466021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.466041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.466048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.466053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.466069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.475929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.476030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.476050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.476057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.476062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.476078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.485921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.486012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.486033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.486039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.486044] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.486060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 
00:31:26.348 [2024-06-09 23:13:54.495968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.496062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.496082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.496089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.496094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.496110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.506198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.506289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.506309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.506319] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.506324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.348 [2024-06-09 23:13:54.506340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.348 qpair failed and we were unable to recover it. 00:31:26.348 [2024-06-09 23:13:54.516020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.348 [2024-06-09 23:13:54.516108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.348 [2024-06-09 23:13:54.516123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.348 [2024-06-09 23:13:54.516130] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.348 [2024-06-09 23:13:54.516134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.349 [2024-06-09 23:13:54.516148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.349 qpair failed and we were unable to recover it. 
00:31:26.610 [2024-06-09 23:13:54.526052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.526142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.526162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.526169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.526174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.526190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.536045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.536134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.536149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.536155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.536160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.536176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.546128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.546214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.546229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.546235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.546240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.546253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 
00:31:26.610 [2024-06-09 23:13:54.556270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.556367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.556387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.556394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.556400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.556424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.566165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.566254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.566270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.566276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.566280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.566294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.576190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.576279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.576300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.576306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.576311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.576327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 
00:31:26.610 [2024-06-09 23:13:54.586203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.586316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.586331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.586337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.586341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.586354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.596269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.596412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.596426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.596435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.596440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.596453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.606242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.606330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.606343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.606350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.606354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.606367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 
00:31:26.610 [2024-06-09 23:13:54.616307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.616400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.610 [2024-06-09 23:13:54.616419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.610 [2024-06-09 23:13:54.616425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.610 [2024-06-09 23:13:54.616430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.610 [2024-06-09 23:13:54.616444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.610 qpair failed and we were unable to recover it. 00:31:26.610 [2024-06-09 23:13:54.626351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.610 [2024-06-09 23:13:54.626451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.626466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.626472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.626477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.626489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.636232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.636320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.636333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.636340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.636344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.636356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 
00:31:26.611 [2024-06-09 23:13:54.646265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.646354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.646369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.646375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.646379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.646392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.656374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.656464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.656478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.656484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.656488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.656500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.666318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.666408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.666421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.666427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.666432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.666446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 
00:31:26.611 [2024-06-09 23:13:54.676494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.676591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.676606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.676612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.676616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.676628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.686479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.686566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.686582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.686588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.686593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.686605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.696526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.696613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.696628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.696634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.696638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.696651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 
00:31:26.611 [2024-06-09 23:13:54.706539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.706624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.706638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.706644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.706649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.706662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.716584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.716680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.716694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.716699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.716704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.716717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.726600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.726706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.726719] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.726725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.726730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.726745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 
00:31:26.611 [2024-06-09 23:13:54.736634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.736720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.736734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.736740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.736744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.736756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.746724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.746836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.746850] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.746856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.746861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.746873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 00:31:26.611 [2024-06-09 23:13:54.756704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.611 [2024-06-09 23:13:54.756798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.611 [2024-06-09 23:13:54.756812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.611 [2024-06-09 23:13:54.756818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.611 [2024-06-09 23:13:54.756822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.611 [2024-06-09 23:13:54.756834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.611 qpair failed and we were unable to recover it. 
00:31:26.612 [2024-06-09 23:13:54.766713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.612 [2024-06-09 23:13:54.766798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.612 [2024-06-09 23:13:54.766811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.612 [2024-06-09 23:13:54.766818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.612 [2024-06-09 23:13:54.766822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.612 [2024-06-09 23:13:54.766835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.612 qpair failed and we were unable to recover it. 00:31:26.612 [2024-06-09 23:13:54.776713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.612 [2024-06-09 23:13:54.776803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.612 [2024-06-09 23:13:54.776820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.612 [2024-06-09 23:13:54.776825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.612 [2024-06-09 23:13:54.776830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.612 [2024-06-09 23:13:54.776842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.612 qpair failed and we were unable to recover it. 00:31:26.612 [2024-06-09 23:13:54.786744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.612 [2024-06-09 23:13:54.786828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.612 [2024-06-09 23:13:54.786841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.612 [2024-06-09 23:13:54.786847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.612 [2024-06-09 23:13:54.786852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.612 [2024-06-09 23:13:54.786863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.612 qpair failed and we were unable to recover it. 
00:31:26.873 [2024-06-09 23:13:54.796803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.873 [2024-06-09 23:13:54.796895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.873 [2024-06-09 23:13:54.796909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.873 [2024-06-09 23:13:54.796914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.873 [2024-06-09 23:13:54.796919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.873 [2024-06-09 23:13:54.796931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.873 qpair failed and we were unable to recover it. 00:31:26.873 [2024-06-09 23:13:54.806823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.806907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.806921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.806927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.806932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.806944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.816853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.816948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.816969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.816976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.816981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.817001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 
00:31:26.874 [2024-06-09 23:13:54.826906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.826999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.827019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.827026] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.827031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.827047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.836914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.837010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.837030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.837037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.837041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.837057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.846927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.847017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.847038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.847044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.847049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.847065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 
00:31:26.874 [2024-06-09 23:13:54.856927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.857059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.857074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.857080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.857085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.857098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.866994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.867093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.867117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.867124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.867129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.867145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.877020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.877119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.877140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.877147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.877152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.877167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 
00:31:26.874 [2024-06-09 23:13:54.886963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.887054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.887070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.887076] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.887080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.887093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.897073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.897205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.897220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.897226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.897231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.897244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.907108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.907206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.907220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.907226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.907235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.907247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 
00:31:26.874 [2024-06-09 23:13:54.917141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.917227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.917241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.917248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.917253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.917265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.927032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.927152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.927168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.927174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.927179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.874 [2024-06-09 23:13:54.927191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.874 qpair failed and we were unable to recover it. 00:31:26.874 [2024-06-09 23:13:54.937083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.874 [2024-06-09 23:13:54.937174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.874 [2024-06-09 23:13:54.937188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.874 [2024-06-09 23:13:54.937194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.874 [2024-06-09 23:13:54.937199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.937211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 
00:31:26.875 [2024-06-09 23:13:54.947228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.947318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.947332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.947339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.947344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.947356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:54.957246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.957341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.957355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.957361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.957365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.957377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:54.967204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.967284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.967297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.967304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.967309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.967321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 
00:31:26.875 [2024-06-09 23:13:54.977255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.977338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.977351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.977358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.977362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.977374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:54.987324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.987412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.987426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.987431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.987436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.987448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:54.997255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:54.997344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:54.997359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:54.997365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:54.997372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:54.997385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 
00:31:26.875 [2024-06-09 23:13:55.007344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:55.007433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:55.007447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:55.007453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:55.007458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:55.007470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:55.017372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:55.017463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:55.017477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:55.017483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:55.017488] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:55.017500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:55.027447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:55.027530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:55.027544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:55.027550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:55.027554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:55.027566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 
00:31:26.875 [2024-06-09 23:13:55.037448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:55.037537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:55.037551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:55.037557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:55.037561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:55.037573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:26.875 [2024-06-09 23:13:55.047461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.875 [2024-06-09 23:13:55.047543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.875 [2024-06-09 23:13:55.047557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.875 [2024-06-09 23:13:55.047563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.875 [2024-06-09 23:13:55.047567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:26.875 [2024-06-09 23:13:55.047579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.875 qpair failed and we were unable to recover it. 00:31:27.137 [2024-06-09 23:13:55.057515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.137 [2024-06-09 23:13:55.057602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.137 [2024-06-09 23:13:55.057615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.137 [2024-06-09 23:13:55.057620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.137 [2024-06-09 23:13:55.057625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.137 [2024-06-09 23:13:55.057640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.137 qpair failed and we were unable to recover it. 
00:31:27.137 [2024-06-09 23:13:55.067554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.137 [2024-06-09 23:13:55.067688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.137 [2024-06-09 23:13:55.067702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.137 [2024-06-09 23:13:55.067708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.137 [2024-06-09 23:13:55.067713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.137 [2024-06-09 23:13:55.067725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.137 qpair failed and we were unable to recover it. 00:31:27.137 [2024-06-09 23:13:55.077592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.137 [2024-06-09 23:13:55.077679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.137 [2024-06-09 23:13:55.077692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.137 [2024-06-09 23:13:55.077698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.137 [2024-06-09 23:13:55.077703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.137 [2024-06-09 23:13:55.077715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.087596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.087686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.087700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.087708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.087713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.087725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 
00:31:27.138 [2024-06-09 23:13:55.097647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.097729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.097743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.097749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.097753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.097765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.107686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.107771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.107784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.107790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.107795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.107807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.117668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.117755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.117769] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.117774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.117779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.117790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 
00:31:27.138 [2024-06-09 23:13:55.127732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.127823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.127836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.127842] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.127846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.127859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.137687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.137782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.137802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.137808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.137813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.137829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.147792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.147884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.147904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.147911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.147915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.147931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 
00:31:27.138 [2024-06-09 23:13:55.157846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.157952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.157967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.157973] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.157977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.157990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.167862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.168004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.168024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.168031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.168036] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.168051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.177812] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.177900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.177923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.177930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.177935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.177951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 
00:31:27.138 [2024-06-09 23:13:55.187752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.187841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.187857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.187863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.187867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.187881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.197792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.197885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.197899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.197906] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.197910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.197922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 00:31:27.138 [2024-06-09 23:13:55.207944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.208034] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.208054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.208060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.138 [2024-06-09 23:13:55.208065] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.138 [2024-06-09 23:13:55.208081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.138 qpair failed and we were unable to recover it. 
00:31:27.138 [2024-06-09 23:13:55.217832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.138 [2024-06-09 23:13:55.217921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.138 [2024-06-09 23:13:55.217941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.138 [2024-06-09 23:13:55.217948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.217953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.217972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.227990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.228087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.228107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.228113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.228118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.228134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.238017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.238107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.238127] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.238134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.238139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.238155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 
00:31:27.139 [2024-06-09 23:13:55.248212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.248298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.248312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.248318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.248323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.248336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.258061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.258149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.258163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.258169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.258173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.258186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.268101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.268187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.268207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.268213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.268217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.268230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 
00:31:27.139 [2024-06-09 23:13:55.278111] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.278202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.278215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.278220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.278225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.278237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.288138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.288229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.288248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.288255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.288260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.288275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.139 [2024-06-09 23:13:55.298137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.298228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.298242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.298248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.298253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.298266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 
00:31:27.139 [2024-06-09 23:13:55.308224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.139 [2024-06-09 23:13:55.308353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.139 [2024-06-09 23:13:55.308368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.139 [2024-06-09 23:13:55.308374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.139 [2024-06-09 23:13:55.308378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.139 [2024-06-09 23:13:55.308395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.139 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.318228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.318318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.318332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.318337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.318342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.318354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.328236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.328318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.328332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.328338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.328342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.328355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 
00:31:27.402 [2024-06-09 23:13:55.338283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.338369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.338382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.338388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.338392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.338410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.348370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.348457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.348471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.348477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.348481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.348493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.358345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.358443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.358460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.358465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.358470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.358482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 
00:31:27.402 [2024-06-09 23:13:55.368366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.368462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.368476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.368482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.368486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.368499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.378396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.378486] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.378500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.378507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.378511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.378523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.388430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.388578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.388591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.388597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.388601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.388612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 
00:31:27.402 [2024-06-09 23:13:55.398432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.398519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.398533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.398538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.398546] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.398558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.408476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.408556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.408570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.408576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.408580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.408592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.418544] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.418843] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.418858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.418863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.418867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.418879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 
00:31:27.402 [2024-06-09 23:13:55.428504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.428589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.428602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.428608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.428613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.428625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.438470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.402 [2024-06-09 23:13:55.438564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.402 [2024-06-09 23:13:55.438577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.402 [2024-06-09 23:13:55.438583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.402 [2024-06-09 23:13:55.438587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.402 [2024-06-09 23:13:55.438599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.402 qpair failed and we were unable to recover it. 00:31:27.402 [2024-06-09 23:13:55.448500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.448626] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.448640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.448646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.448650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.448663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 
00:31:27.403 [2024-06-09 23:13:55.458596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.458683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.458697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.458703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.458708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.458720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.468632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.468718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.468732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.468738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.468742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.468754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.478536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.478625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.478638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.478645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.478649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.478662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 
00:31:27.403 [2024-06-09 23:13:55.488652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.488737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.488752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.488757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.488765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.488777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.498592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.498681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.498696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.498701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.498706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.498718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.508677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.508798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.508812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.508817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.508822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.508834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 
00:31:27.403 [2024-06-09 23:13:55.518799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.519063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.519078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.519084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.519088] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.519100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.528782] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.528876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.528896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.528903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.528908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.528924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.538864] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.538956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.538977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.538984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.538989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.539005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 
00:31:27.403 [2024-06-09 23:13:55.548838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.548930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.548950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.548956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.548961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.548977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.558939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.559045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.559061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.559067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.559071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.559085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 00:31:27.403 [2024-06-09 23:13:55.568886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.403 [2024-06-09 23:13:55.568974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.403 [2024-06-09 23:13:55.568988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.403 [2024-06-09 23:13:55.568994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.403 [2024-06-09 23:13:55.568999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.403 [2024-06-09 23:13:55.569011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.403 qpair failed and we were unable to recover it. 
00:31:27.669 [2024-06-09 23:13:55.578916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.579010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.579024] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.579034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.579039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.579051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.589032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.589120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.589133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.589139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.589143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.589156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.598994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.599088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.599108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.599114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.599120] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.599135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 
00:31:27.669 [2024-06-09 23:13:55.609032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.609124] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.609144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.609151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.609156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.609172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.619052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.619134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.619149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.619154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.619159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.619172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.629077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.629165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.629179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.629185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.629189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.629202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 
00:31:27.669 [2024-06-09 23:13:55.639129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.639258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.639278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.639284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.639289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.639304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.649147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.649240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.649256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.649262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.649267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.649280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.659175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.659262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.659276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.659282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.659287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.659299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 
00:31:27.669 [2024-06-09 23:13:55.669217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.669307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.669322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.669332] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.669336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.669349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.679228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.679320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.679334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.679340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.679345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.679358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 00:31:27.669 [2024-06-09 23:13:55.689255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.689339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.689353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.669 [2024-06-09 23:13:55.689359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.669 [2024-06-09 23:13:55.689364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.669 [2024-06-09 23:13:55.689376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.669 qpair failed and we were unable to recover it. 
00:31:27.669 [2024-06-09 23:13:55.699245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.669 [2024-06-09 23:13:55.699326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.669 [2024-06-09 23:13:55.699341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.699347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.699351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.699364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.709314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.709398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.709417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.709423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.709428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.709441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.719317] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.719577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.719591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.719597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.719602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.719615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 
00:31:27.670 [2024-06-09 23:13:55.729354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.729452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.729467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.729473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.729478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.729491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.739357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.739449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.739463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.739469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.739474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.739486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.749436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.749531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.749545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.749551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.749555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.749568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 
00:31:27.670 [2024-06-09 23:13:55.759431] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.759523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.759540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.759545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.759550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.759562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.769471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.769565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.769578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.769584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.769589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.769601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.779505] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.779590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.779603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.779609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.779614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.779627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 
00:31:27.670 [2024-06-09 23:13:55.789404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.789490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.789503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.789510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.789514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.789527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.799563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.799667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.799682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.799688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.799693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.799709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.809581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.809668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.809682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.809687] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.809692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.809704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 
00:31:27.670 [2024-06-09 23:13:55.819614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.819700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.819714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.819720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.670 [2024-06-09 23:13:55.819724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.670 [2024-06-09 23:13:55.819737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.670 qpair failed and we were unable to recover it. 00:31:27.670 [2024-06-09 23:13:55.829660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.670 [2024-06-09 23:13:55.829794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.670 [2024-06-09 23:13:55.829809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.670 [2024-06-09 23:13:55.829815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.671 [2024-06-09 23:13:55.829819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.671 [2024-06-09 23:13:55.829832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.671 qpair failed and we were unable to recover it. 00:31:27.671 [2024-06-09 23:13:55.839680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.671 [2024-06-09 23:13:55.839779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.671 [2024-06-09 23:13:55.839793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.671 [2024-06-09 23:13:55.839799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.671 [2024-06-09 23:13:55.839803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.671 [2024-06-09 23:13:55.839816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.671 qpair failed and we were unable to recover it. 
00:31:27.954 [2024-06-09 23:13:55.849700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.849790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.849807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.849812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.849817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.849829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 00:31:27.954 [2024-06-09 23:13:55.859708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.859842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.859857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.859862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.859867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.859879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 00:31:27.954 [2024-06-09 23:13:55.869683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.869771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.869784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.869791] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.869796] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.869808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 
00:31:27.954 [2024-06-09 23:13:55.879809] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.879895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.879909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.879915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.879919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.879931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 00:31:27.954 [2024-06-09 23:13:55.889810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.889898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.889912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.889917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.889925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.889938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 00:31:27.954 [2024-06-09 23:13:55.899710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.899794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.899808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.899814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.899819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.954 [2024-06-09 23:13:55.899831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.954 qpair failed and we were unable to recover it. 
00:31:27.954 [2024-06-09 23:13:55.909740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.954 [2024-06-09 23:13:55.909827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.954 [2024-06-09 23:13:55.909840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.954 [2024-06-09 23:13:55.909847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.954 [2024-06-09 23:13:55.909851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.909863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.919903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.919989] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.920002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.920008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.920013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.920026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.929935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.930026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.930047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.930054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.930059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.930075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 
00:31:27.955 [2024-06-09 23:13:55.939933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.940023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.940043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.940051] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.940056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.940071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.949993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.950083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.950103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.950110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.950115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.950131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.960017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.960123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.960143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.960150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.960155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.960171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 
00:31:27.955 [2024-06-09 23:13:55.970063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.970153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.970173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.970180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.970185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.970201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.980044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.980144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.980159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.980165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.980173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.980186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:55.990161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:55.990255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:55.990275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:55.990282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:55.990287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:55.990303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 
00:31:27.955 [2024-06-09 23:13:56.000134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.000229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.000245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:56.000251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:56.000255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:56.000268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:56.010166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.010252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.010267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:56.010272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:56.010277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:56.010289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:56.020171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.020255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.020269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:56.020275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:56.020280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:56.020292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 
00:31:27.955 [2024-06-09 23:13:56.030219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.030309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.030324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:56.030330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:56.030335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:56.030346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:56.040245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.040334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.040347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.955 [2024-06-09 23:13:56.040353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.955 [2024-06-09 23:13:56.040358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.955 [2024-06-09 23:13:56.040370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.955 qpair failed and we were unable to recover it. 00:31:27.955 [2024-06-09 23:13:56.050288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.955 [2024-06-09 23:13:56.050410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.955 [2024-06-09 23:13:56.050425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.050430] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.050435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.050449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 
00:31:27.956 [2024-06-09 23:13:56.060267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.060355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.060368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.060374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.060379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.060391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:27.956 [2024-06-09 23:13:56.070188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.070277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.070290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.070303] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.070308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.070321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:27.956 [2024-06-09 23:13:56.080232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.080325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.080339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.080345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.080350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.080362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 
00:31:27.956 [2024-06-09 23:13:56.090379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.090470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.090483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.090490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.090495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.090508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:27.956 [2024-06-09 23:13:56.100415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.100500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.100514] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.100520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.100525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.100537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:27.956 [2024-06-09 23:13:56.110448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.110536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.110549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.110555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.110560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.110572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 
00:31:27.956 [2024-06-09 23:13:56.120436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.120525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.120539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.120544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.120549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.120560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:27.956 [2024-06-09 23:13:56.130473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.956 [2024-06-09 23:13:56.130556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.956 [2024-06-09 23:13:56.130570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.956 [2024-06-09 23:13:56.130576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.956 [2024-06-09 23:13:56.130580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:27.956 [2024-06-09 23:13:56.130592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.956 qpair failed and we were unable to recover it. 00:31:28.217 [2024-06-09 23:13:56.140522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.140609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.140623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.140629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.140634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.140646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 
00:31:28.217 [2024-06-09 23:13:56.150606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.150721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.150736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.150742] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.150746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.150760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 00:31:28.217 [2024-06-09 23:13:56.160537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.160630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.160645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.160654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.160658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.160671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 00:31:28.217 [2024-06-09 23:13:56.170589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.170670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.170684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.170690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.170695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.170707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 
00:31:28.217 [2024-06-09 23:13:56.180587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.180676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.180690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.180696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.180701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.180713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 00:31:28.217 [2024-06-09 23:13:56.190670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.190754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.190768] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.190774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.190779] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.190791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 00:31:28.217 [2024-06-09 23:13:56.200705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.200799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.217 [2024-06-09 23:13:56.200814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.217 [2024-06-09 23:13:56.200820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.217 [2024-06-09 23:13:56.200824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.217 [2024-06-09 23:13:56.200837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.217 qpair failed and we were unable to recover it. 
00:31:28.217 [2024-06-09 23:13:56.210704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.217 [2024-06-09 23:13:56.210791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.210805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.210811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.210815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.210828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.220716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.220805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.220819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.220825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.220829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.220842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.230746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.230835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.230849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.230855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.230859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.230871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 
00:31:28.218 [2024-06-09 23:13:56.240796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.240894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.240908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.240913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.240918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.240930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.250849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.250972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.250989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.250995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.250999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.251011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.260841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.260931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.260952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.260959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.260963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.260979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 
00:31:28.218 [2024-06-09 23:13:56.270937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.271025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.271040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.271047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.271052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.271065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.280901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.281017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.281031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.281037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.281042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.281055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.290928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.291016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.291030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.291036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.291041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.291058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 
00:31:28.218 [2024-06-09 23:13:56.300823] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.300911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.300931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.300938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.300943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.300958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.310870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.310958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.310973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.310979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.218 [2024-06-09 23:13:56.310984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.218 [2024-06-09 23:13:56.310996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.218 qpair failed and we were unable to recover it. 00:31:28.218 [2024-06-09 23:13:56.320991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.218 [2024-06-09 23:13:56.321082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.218 [2024-06-09 23:13:56.321102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.218 [2024-06-09 23:13:56.321109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.321114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.321129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 
00:31:28.219 [2024-06-09 23:13:56.331029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.331121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.331141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.331147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.331152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.331168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 00:31:28.219 [2024-06-09 23:13:56.340935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.341026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.341050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.341057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.341062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.341078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 00:31:28.219 [2024-06-09 23:13:56.351059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.351149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.351169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.351176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.351181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.351197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 
00:31:28.219 [2024-06-09 23:13:56.361115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.361212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.361233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.361239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.361244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.361260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 00:31:28.219 [2024-06-09 23:13:56.371134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.371231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.371251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.371258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.371263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.371278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 00:31:28.219 [2024-06-09 23:13:56.381126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.381210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.381224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.381231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.381236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.381252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 
00:31:28.219 [2024-06-09 23:13:56.391136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.219 [2024-06-09 23:13:56.391254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.219 [2024-06-09 23:13:56.391268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.219 [2024-06-09 23:13:56.391274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.219 [2024-06-09 23:13:56.391278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.219 [2024-06-09 23:13:56.391290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.219 qpair failed and we were unable to recover it. 00:31:28.480 [2024-06-09 23:13:56.401224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.480 [2024-06-09 23:13:56.401320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.480 [2024-06-09 23:13:56.401335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.480 [2024-06-09 23:13:56.401341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.480 [2024-06-09 23:13:56.401345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.480 [2024-06-09 23:13:56.401358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.480 qpair failed and we were unable to recover it. 00:31:28.480 [2024-06-09 23:13:56.411250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.480 [2024-06-09 23:13:56.411334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.480 [2024-06-09 23:13:56.411348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.480 [2024-06-09 23:13:56.411354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.480 [2024-06-09 23:13:56.411358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.480 [2024-06-09 23:13:56.411370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.480 qpair failed and we were unable to recover it. 
00:31:28.480 [2024-06-09 23:13:56.421143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.480 [2024-06-09 23:13:56.421231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.480 [2024-06-09 23:13:56.421244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.480 [2024-06-09 23:13:56.421250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.480 [2024-06-09 23:13:56.421255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.480 [2024-06-09 23:13:56.421267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.480 qpair failed and we were unable to recover it. 00:31:28.480 [2024-06-09 23:13:56.431163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.480 [2024-06-09 23:13:56.431251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.480 [2024-06-09 23:13:56.431267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.480 [2024-06-09 23:13:56.431273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.480 [2024-06-09 23:13:56.431277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.480 [2024-06-09 23:13:56.431289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.480 qpair failed and we were unable to recover it. 00:31:28.480 [2024-06-09 23:13:56.441386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.480 [2024-06-09 23:13:56.441479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.480 [2024-06-09 23:13:56.441493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.480 [2024-06-09 23:13:56.441500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.480 [2024-06-09 23:13:56.441504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.441516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 
00:31:28.481 [2024-06-09 23:13:56.451353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.451437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.451451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.451457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.451462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.451474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.461357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.461449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.461463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.461470] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.461474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.461487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.471467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.471550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.471564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.471570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.471578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.471591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 
00:31:28.481 [2024-06-09 23:13:56.481445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.481534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.481547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.481554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.481558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.481570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.491455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.491536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.491549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.491556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.491560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.491572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.501473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.501605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.501619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.501625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.501630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.501642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 
00:31:28.481 [2024-06-09 23:13:56.511500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.511584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.511598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.511604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.511609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.511621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.521515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.521610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.521624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.521630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.521635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.521647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.531508] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.531625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.531639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.531645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.531649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.531661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 
00:31:28.481 [2024-06-09 23:13:56.541566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.541650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.541662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.541669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.541673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.541684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.551641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.551727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.551740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.551747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.551751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.481 [2024-06-09 23:13:56.551765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.481 qpair failed and we were unable to recover it. 00:31:28.481 [2024-06-09 23:13:56.561658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.481 [2024-06-09 23:13:56.561757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.481 [2024-06-09 23:13:56.561771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.481 [2024-06-09 23:13:56.561779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.481 [2024-06-09 23:13:56.561783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.561796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 
00:31:28.482 [2024-06-09 23:13:56.571714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.571801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.571815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.571821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.571825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.571837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.581747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.581866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.581879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.581885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.581889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.581900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.591686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.591778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.591792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.591798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.591802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.591815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 
00:31:28.482 [2024-06-09 23:13:56.601631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.601893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.601908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.601913] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.601918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.601929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.611788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.611883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.611903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.611910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.611916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.611931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.621679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.621764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.621779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.621785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.621790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.621803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 
00:31:28.482 [2024-06-09 23:13:56.631849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.631938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.631952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.631958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.631962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.631975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.641847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.641942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.641956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.641962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.641966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.641979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 00:31:28.482 [2024-06-09 23:13:56.651903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.482 [2024-06-09 23:13:56.651992] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.482 [2024-06-09 23:13:56.652006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.482 [2024-06-09 23:13:56.652016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.482 [2024-06-09 23:13:56.652020] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.482 [2024-06-09 23:13:56.652033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.482 qpair failed and we were unable to recover it. 
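The block above is the same host-side failure repeating: the target rejects each I/O-queue CONNECT with "Unknown controller ID 0x1", and the host sees the CONNECT completion come back as sct 1, sc 130. A minimal sketch for reading that status pair, assuming the usual NVMe status-field split; the 0x82 interpretation comes from the NVMe-oF Fabrics status tables, not from this log:

  # Decode the "sct 1, sc 130" printed by nvme_fabric_qpair_connect_poll.
  sct=1; sc=130
  printf 'sct=%d (command-specific status), sc=0x%02x\n' "$sct" "$sc"
  # For a Fabrics CONNECT, status code 0x82 is normally "Connect Invalid
  # Parameters", which lines up with the target-side "Unknown controller
  # ID 0x1" error above.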
00:31:28.743 [2024-06-09 23:13:56.661922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.743 [2024-06-09 23:13:56.662019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.743 [2024-06-09 23:13:56.662039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.743 [2024-06-09 23:13:56.662046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.743 [2024-06-09 23:13:56.662051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd284000b90 00:31:28.743 [2024-06-09 23:13:56.662067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:28.743 qpair failed and we were unable to recover it. 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.743 starting I/O failed 00:31:28.743 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error 
(sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 [2024-06-09 23:13:56.662963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:28.744 [2024-06-09 23:13:56.672147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.744 [2024-06-09 23:13:56.672448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.744 [2024-06-09 23:13:56.672506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.744 [2024-06-09 23:13:56.672529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.744 [2024-06-09 23:13:56.672558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd27c000b90 00:31:28.744 [2024-06-09 23:13:56.672609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:28.744 qpair failed and we were unable to recover it. 00:31:28.744 [2024-06-09 23:13:56.682058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.744 [2024-06-09 23:13:56.682267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.744 [2024-06-09 23:13:56.682301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.744 [2024-06-09 23:13:56.682318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.744 [2024-06-09 23:13:56.682332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd27c000b90 00:31:28.744 [2024-06-09 23:13:56.682365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:28.744 qpair failed and we were unable to recover it. 
00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Write completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 [2024-06-09 23:13:56.682816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.744 [2024-06-09 23:13:56.692031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.744 [2024-06-09 23:13:56.692147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.744 [2024-06-09 23:13:56.692170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.744 [2024-06-09 23:13:56.692178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF 
Fabric CONNECT command 00:31:28.744 [2024-06-09 23:13:56.692185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd28c000b90 00:31:28.744 [2024-06-09 23:13:56.692208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.744 qpair failed and we were unable to recover it. 00:31:28.744 [2024-06-09 23:13:56.702056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.744 [2024-06-09 23:13:56.702184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.744 [2024-06-09 23:13:56.702211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.744 [2024-06-09 23:13:56.702220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.744 [2024-06-09 23:13:56.702227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd28c000b90 00:31:28.744 [2024-06-09 23:13:56.702249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:28.744 qpair failed and we were unable to recover it. 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.744 Read completed with error (sct=0, sc=8) 00:31:28.744 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O 
failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Read completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 Write completed with error (sct=0, sc=8) 00:31:28.745 starting I/O failed 00:31:28.745 [2024-06-09 23:13:56.702686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:28.745 [2024-06-09 23:13:56.712095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.745 [2024-06-09 23:13:56.712207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.745 [2024-06-09 23:13:56.712230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.745 [2024-06-09 23:13:56.712238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.745 [2024-06-09 23:13:56.712245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2d90 00:31:28.745 [2024-06-09 23:13:56.712263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:28.745 qpair failed and we were unable to recover it. 00:31:28.745 [2024-06-09 23:13:56.722094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:28.745 [2024-06-09 23:13:56.722236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:28.745 [2024-06-09 23:13:56.722263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:28.745 [2024-06-09 23:13:56.722272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:28.745 [2024-06-09 23:13:56.722279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8a2d90 00:31:28.745 [2024-06-09 23:13:56.722300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:28.745 qpair failed and we were unable to recover it. 
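Each burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" above is the host aborting whatever commands were still queued on the qpair that just dropped; sct 0 is the generic status set, and 0x08 is listed there as "Command Aborted due to SQ Deletion". A quick way to size those bursts when working from a saved copy of this console output (the build.log file name is an assumption, not something this run produced):

  # Count aborted commands across the whole log; grep -o is used because
  # several entries can share one physical line in the captured output.
  grep -o 'starting I/O failed' build.log | wc -l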
00:31:28.745 [2024-06-09 23:13:56.722683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a1b50 is same with the state(5) to be set 00:31:28.745 [2024-06-09 23:13:56.722969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a1b50 (9): Bad file descriptor 00:31:28.745 Initializing NVMe Controllers 00:31:28.745 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:28.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:28.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:28.745 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:28.745 Initialization complete. Launching workers. 00:31:28.745 Starting thread on core 1 00:31:28.745 Starting thread on core 2 00:31:28.745 Starting thread on core 3 00:31:28.745 Starting thread on core 0 00:31:28.745 23:13:56 -- host/target_disconnect.sh@59 -- # sync 00:31:28.745 00:31:28.745 real 0m11.323s 00:31:28.745 user 0m19.595s 00:31:28.745 sys 0m4.437s 00:31:28.745 23:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.745 23:13:56 -- common/autotest_common.sh@10 -- # set +x 00:31:28.745 ************************************ 00:31:28.745 END TEST nvmf_target_disconnect_tc2 00:31:28.745 ************************************ 00:31:28.745 23:13:56 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:28.745 23:13:56 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:28.745 23:13:56 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:28.745 23:13:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:28.745 23:13:56 -- nvmf/common.sh@116 -- # sync 00:31:28.745 23:13:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:28.745 23:13:56 -- nvmf/common.sh@119 -- # set +e 00:31:28.745 23:13:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:28.745 23:13:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:28.745 rmmod nvme_tcp 00:31:28.745 rmmod nvme_fabrics 00:31:28.745 rmmod nvme_keyring 00:31:28.745 23:13:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:28.745 23:13:56 -- nvmf/common.sh@123 -- # set -e 00:31:28.745 23:13:56 -- nvmf/common.sh@124 -- # return 0 00:31:28.745 23:13:56 -- nvmf/common.sh@477 -- # '[' -n 111320 ']' 00:31:28.745 23:13:56 -- nvmf/common.sh@478 -- # killprocess 111320 00:31:28.745 23:13:56 -- common/autotest_common.sh@926 -- # '[' -z 111320 ']' 00:31:28.745 23:13:56 -- common/autotest_common.sh@930 -- # kill -0 111320 00:31:28.745 23:13:56 -- common/autotest_common.sh@931 -- # uname 00:31:28.745 23:13:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:28.745 23:13:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111320 00:31:28.745 23:13:56 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:31:28.745 23:13:56 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:31:28.745 23:13:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111320' 00:31:28.745 killing process with pid 111320 00:31:28.745 23:13:56 -- common/autotest_common.sh@945 -- # kill 111320 00:31:28.745 23:13:56 -- common/autotest_common.sh@950 -- # wait 111320 00:31:29.004 23:13:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
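With that, nvmf_target_disconnect_tc2 finishes and nvmftestfini unwinds the host stack: nvme-tcp, nvme-fabrics and nvme-keyring are unloaded and the nvmf_tgt application (pid 111320 in this run) is killed and reaped. A hand-written equivalent of that teardown, as a sketch only; the retry loop mirrors the "for i in {1..20}" in the trace, while the sleep between attempts and the $nvmf_tgt_pid variable are my additions:

  # Manual version of the nvmftestfini teardown traced above.
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # module may still be busy right
      sleep 1                            # after a test, so retry (sleep added)
  done
  modprobe -v -r nvme-fabrics
  kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"   # pid 111320 in this run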
00:31:29.004 23:13:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:29.004 23:13:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:29.004 23:13:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:29.004 23:13:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:29.004 23:13:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.004 23:13:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:29.004 23:13:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.913 23:13:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:30.913 00:31:30.913 real 0m20.982s 00:31:30.913 user 0m47.115s 00:31:30.913 sys 0m9.967s 00:31:30.913 23:13:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.913 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:30.913 ************************************ 00:31:30.913 END TEST nvmf_target_disconnect 00:31:30.913 ************************************ 00:31:31.174 23:13:59 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:31:31.174 23:13:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:31.174 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:31.174 23:13:59 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:31:31.174 00:31:31.174 real 24m21.134s 00:31:31.174 user 65m7.962s 00:31:31.174 sys 6m34.603s 00:31:31.174 23:13:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.174 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:31.174 ************************************ 00:31:31.174 END TEST nvmf_tcp 00:31:31.174 ************************************ 00:31:31.174 23:13:59 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:31:31.174 23:13:59 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.174 23:13:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:31.174 23:13:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:31.174 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:31.174 ************************************ 00:31:31.174 START TEST spdkcli_nvmf_tcp 00:31:31.174 ************************************ 00:31:31.174 23:13:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:31.174 * Looking for test storage... 
00:31:31.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:31.174 23:13:59 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:31.174 23:13:59 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:31.174 23:13:59 -- nvmf/common.sh@7 -- # uname -s 00:31:31.174 23:13:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:31.174 23:13:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:31.174 23:13:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:31.174 23:13:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:31.174 23:13:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:31.174 23:13:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:31.174 23:13:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:31.174 23:13:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:31.174 23:13:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:31.174 23:13:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:31.174 23:13:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.174 23:13:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:31.174 23:13:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:31.174 23:13:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:31.174 23:13:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:31.174 23:13:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:31.174 23:13:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:31.174 23:13:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:31.174 23:13:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:31.174 23:13:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.174 23:13:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.174 23:13:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.174 23:13:59 -- paths/export.sh@5 -- # export PATH 00:31:31.174 23:13:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:31.174 23:13:59 -- nvmf/common.sh@46 -- # : 0 00:31:31.174 23:13:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:31.174 23:13:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:31.174 23:13:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:31.174 23:13:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:31.174 23:13:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:31.174 23:13:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:31.174 23:13:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:31.174 23:13:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:31.174 23:13:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:31.174 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:31.174 23:13:59 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:31.174 23:13:59 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=113218 00:31:31.174 23:13:59 -- spdkcli/common.sh@34 -- # waitforlisten 113218 00:31:31.174 23:13:59 -- common/autotest_common.sh@819 -- # '[' -z 113218 ']' 00:31:31.174 23:13:59 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:31.174 23:13:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.174 23:13:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:31.174 23:13:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.174 23:13:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:31.174 23:13:59 -- common/autotest_common.sh@10 -- # set +x 00:31:31.434 [2024-06-09 23:13:59.374387] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
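run_nvmf_tgt above launches the target with a two-core mask and then sits in waitforlisten until the application answers on its RPC socket. Stripped of the framework wrappers it is roughly the following; the rpc_get_methods polling loop is my reading of what waitforlisten does rather than a literal copy, and the relative paths assume the SPDK repository root:

  # Sketch of run_nvmf_tgt / waitforlisten from the trace above.
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # poll until the app opens /var/tmp/spdk.sock
  done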
00:31:31.434 [2024-06-09 23:13:59.374450] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113218 ] 00:31:31.434 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.434 [2024-06-09 23:13:59.432571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:31.434 [2024-06-09 23:13:59.495586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:31.434 [2024-06-09 23:13:59.495793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.434 [2024-06-09 23:13:59.495799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.004 23:14:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:32.004 23:14:00 -- common/autotest_common.sh@852 -- # return 0 00:31:32.004 23:14:00 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:32.004 23:14:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:32.004 23:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:32.004 23:14:00 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:32.004 23:14:00 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:32.004 23:14:00 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:32.004 23:14:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:32.004 23:14:00 -- common/autotest_common.sh@10 -- # set +x 00:31:32.004 23:14:00 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:32.004 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:32.004 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:32.004 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:32.004 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:32.004 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:32.004 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:32.004 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:32.004 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:32.004 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:32.004 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:32.004 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:32.004 ' 00:31:32.575 [2024-06-09 23:14:00.495986] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:34.486 [2024-06-09 23:14:02.497108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.867 [2024-06-09 23:14:03.660929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:37.773 [2024-06-09 23:14:05.803272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:39.683 [2024-06-09 23:14:07.636841] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:41.067 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:41.067 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:41.067 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:41.067 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:41.068 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:41.068 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:41.068 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:41.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
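Every spdkcli path in the job above is a thin wrapper over one JSON-RPC call, so the same configuration can be built with scripts/rpc.py directly. A hedged translation of the first few create commands; the short option spellings follow the rpc.py help text as I remember it and are not copied from this run:

  # rpc.py equivalents of the spdkcli create commands above (sketch).
  ./scripts/rpc.py bdev_malloc_create -b Malloc1 32 512
  ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
  # (max_io_qpairs_per_ctrlr maps to its own transport option as well; omitted here)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
      -s N37SXV509SRW -m 4 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
      -t tcp -a 127.0.0.1 -s 4260 -f ipv4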
00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:41.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:41.068 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:41.068 23:14:09 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:41.068 23:14:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:41.068 23:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:41.068 23:14:09 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:41.068 23:14:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:41.068 23:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:41.068 23:14:09 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:41.068 23:14:09 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:41.637 23:14:09 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:41.637 23:14:09 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:41.637 23:14:09 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:41.637 23:14:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:41.637 23:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:41.637 23:14:09 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:31:41.637 23:14:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:41.637 23:14:09 -- common/autotest_common.sh@10 -- # set +x 00:31:41.637 23:14:09 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:41.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:41.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:41.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:41.637 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:41.637 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:41.637 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:41.637 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:41.637 ' 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:46.922 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:46.922 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:46.922 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:46.922 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:46.922 23:14:14 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:46.922 23:14:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:46.922 23:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:46.922 23:14:14 -- spdkcli/nvmf.sh@90 -- # killprocess 113218 00:31:46.922 23:14:14 -- common/autotest_common.sh@926 -- # '[' -z 113218 ']' 00:31:46.922 23:14:14 -- 
common/autotest_common.sh@930 -- # kill -0 113218 00:31:46.922 23:14:14 -- common/autotest_common.sh@931 -- # uname 00:31:46.922 23:14:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:46.922 23:14:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113218 00:31:46.922 23:14:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:46.922 23:14:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:46.922 23:14:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113218' 00:31:46.922 killing process with pid 113218 00:31:46.922 23:14:14 -- common/autotest_common.sh@945 -- # kill 113218 00:31:46.922 [2024-06-09 23:14:14.594079] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:46.922 23:14:14 -- common/autotest_common.sh@950 -- # wait 113218 00:31:46.922 23:14:14 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:46.922 23:14:14 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:46.922 23:14:14 -- spdkcli/common.sh@13 -- # '[' -n 113218 ']' 00:31:46.922 23:14:14 -- spdkcli/common.sh@14 -- # killprocess 113218 00:31:46.922 23:14:14 -- common/autotest_common.sh@926 -- # '[' -z 113218 ']' 00:31:46.922 23:14:14 -- common/autotest_common.sh@930 -- # kill -0 113218 00:31:46.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (113218) - No such process 00:31:46.922 23:14:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 113218 is not found' 00:31:46.922 Process with pid 113218 is not found 00:31:46.922 23:14:14 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:46.922 23:14:14 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:46.922 23:14:14 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:46.922 00:31:46.922 real 0m15.524s 00:31:46.922 user 0m31.952s 00:31:46.922 sys 0m0.692s 00:31:46.922 23:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.922 23:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:46.922 ************************************ 00:31:46.922 END TEST spdkcli_nvmf_tcp 00:31:46.922 ************************************ 00:31:46.922 23:14:14 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:46.922 23:14:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:46.922 23:14:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.922 23:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:46.922 ************************************ 00:31:46.922 START TEST nvmf_identify_passthru 00:31:46.922 ************************************ 00:31:46.922 23:14:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:46.922 * Looking for test storage... 
00:31:46.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.922 23:14:14 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.922 23:14:14 -- nvmf/common.sh@7 -- # uname -s 00:31:46.922 23:14:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.922 23:14:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.922 23:14:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.922 23:14:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.922 23:14:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.922 23:14:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.922 23:14:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.922 23:14:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.922 23:14:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.922 23:14:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.922 23:14:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.922 23:14:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.922 23:14:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.922 23:14:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.922 23:14:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.922 23:14:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.922 23:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.922 23:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.922 23:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.922 23:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@5 -- # export PATH 00:31:46.922 23:14:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- nvmf/common.sh@46 -- # : 0 00:31:46.922 23:14:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:46.922 23:14:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:46.922 23:14:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:46.922 23:14:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.922 23:14:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.922 23:14:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:46.922 23:14:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:46.922 23:14:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:46.922 23:14:14 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.922 23:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.922 23:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.922 23:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.922 23:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- paths/export.sh@5 -- # export PATH 00:31:46.922 23:14:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.922 23:14:14 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:31:46.922 23:14:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:46.922 23:14:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.922 23:14:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:46.922 23:14:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:46.922 23:14:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:46.922 23:14:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.922 23:14:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.922 23:14:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.922 23:14:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:46.922 23:14:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:46.922 23:14:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:46.922 23:14:14 -- common/autotest_common.sh@10 -- # set +x 00:31:53.509 23:14:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:53.509 23:14:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:53.509 23:14:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:53.509 23:14:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:53.509 23:14:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:53.509 23:14:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:53.509 23:14:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:53.509 23:14:21 -- nvmf/common.sh@294 -- # net_devs=() 00:31:53.509 23:14:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:53.509 23:14:21 -- nvmf/common.sh@295 -- # e810=() 00:31:53.509 23:14:21 -- nvmf/common.sh@295 -- # local -ga e810 00:31:53.509 23:14:21 -- nvmf/common.sh@296 -- # x722=() 00:31:53.509 23:14:21 -- nvmf/common.sh@296 -- # local -ga x722 00:31:53.509 23:14:21 -- nvmf/common.sh@297 -- # mlx=() 00:31:53.509 23:14:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:53.509 23:14:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.509 23:14:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:53.509 23:14:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:53.509 23:14:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:53.509 23:14:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:53.509 23:14:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:53.509 Found 0000:4b:00.0 (0x8086 - 
0x159b) 00:31:53.509 23:14:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:53.509 23:14:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:53.510 23:14:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:53.510 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:53.510 23:14:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:53.510 23:14:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:53.510 23:14:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.510 23:14:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:53.510 23:14:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.510 23:14:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:53.510 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:53.510 23:14:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.510 23:14:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:53.510 23:14:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.510 23:14:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:53.510 23:14:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.510 23:14:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:53.510 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:53.510 23:14:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.510 23:14:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:53.510 23:14:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:53.510 23:14:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:53.510 23:14:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:53.510 23:14:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.510 23:14:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.510 23:14:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.510 23:14:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:53.510 23:14:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.510 23:14:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.510 23:14:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:53.510 23:14:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.510 23:14:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.510 23:14:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:53.510 23:14:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:53.510 23:14:21 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:53.510 23:14:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.771 23:14:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.771 23:14:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.771 23:14:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:53.771 23:14:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.771 23:14:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.771 23:14:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.771 23:14:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:53.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:31:53.771 00:31:53.771 --- 10.0.0.2 ping statistics --- 00:31:53.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.771 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:31:53.771 23:14:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:53.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.446 ms 00:31:53.771 00:31:53.771 --- 10.0.0.1 ping statistics --- 00:31:53.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.771 rtt min/avg/max/mdev = 0.446/0.446/0.446/0.000 ms 00:31:53.771 23:14:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.771 23:14:21 -- nvmf/common.sh@410 -- # return 0 00:31:53.771 23:14:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:53.771 23:14:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.771 23:14:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:53.771 23:14:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:53.771 23:14:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.771 23:14:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:53.771 23:14:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:53.771 23:14:21 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:53.771 23:14:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:53.771 23:14:21 -- common/autotest_common.sh@10 -- # set +x 00:31:53.771 23:14:21 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:53.771 23:14:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:53.771 23:14:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:53.771 23:14:21 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:53.771 23:14:21 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:53.771 23:14:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:53.771 23:14:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:53.771 23:14:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:53.771 23:14:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:53.771 23:14:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:54.032 23:14:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:54.032 23:14:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:31:54.032 23:14:22 -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:31:54.032 23:14:22 -- 
target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:31:54.032 23:14:22 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:31:54.032 23:14:22 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:54.032 23:14:22 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:54.032 23:14:22 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:54.032 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.633 23:14:22 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:31:54.633 23:14:22 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:31:54.633 23:14:22 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:54.633 23:14:22 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:54.633 EAL: No free 2048 kB hugepages reported on node 1 00:31:54.893 23:14:22 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:54.893 23:14:22 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:54.893 23:14:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:54.893 23:14:22 -- common/autotest_common.sh@10 -- # set +x 00:31:54.893 23:14:22 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:54.893 23:14:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:54.893 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:54.893 23:14:23 -- target/identify_passthru.sh@31 -- # nvmfpid=120186 00:31:54.893 23:14:23 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:54.893 23:14:23 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:54.893 23:14:23 -- target/identify_passthru.sh@35 -- # waitforlisten 120186 00:31:54.893 23:14:23 -- common/autotest_common.sh@819 -- # '[' -z 120186 ']' 00:31:54.893 23:14:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.893 23:14:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:54.893 23:14:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:54.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.893 23:14:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:54.893 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:54.893 [2024-06-09 23:14:23.055735] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:31:54.893 [2024-06-09 23:14:23.055784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.154 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.154 [2024-06-09 23:14:23.119479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.154 [2024-06-09 23:14:23.184095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:55.154 [2024-06-09 23:14:23.184225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
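Up to this point the passthru test has read the controller's identify data directly over PCIe, so it has a baseline serial number and model string to compare against once the same device is exposed through the NVMe-oF target. Condensed into plain shell, the steps traced above look roughly like the sketch below (paths are abbreviated, head -n1 stands in for the harness's get_first_nvme_bdf helper, and the PCI address and namespace name are the ones reported on this node):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:65:00.0 here
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
               | grep 'Serial Number:' | awk '{print $3}')                    # S64GNE0R605487
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
               | grep 'Model Number:' | awk '{print $3}')                     # SAMSUNG
    # start the target inside the test namespace and let it wait for RPC configuration
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &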
00:31:55.154 [2024-06-09 23:14:23.184239] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.154 [2024-06-09 23:14:23.184248] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.154 [2024-06-09 23:14:23.184367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.154 [2024-06-09 23:14:23.184499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.154 [2024-06-09 23:14:23.184598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.154 [2024-06-09 23:14:23.184599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.723 23:14:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:55.723 23:14:23 -- common/autotest_common.sh@852 -- # return 0 00:31:55.723 23:14:23 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:55.723 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.723 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:55.723 INFO: Log level set to 20 00:31:55.723 INFO: Requests: 00:31:55.723 { 00:31:55.723 "jsonrpc": "2.0", 00:31:55.723 "method": "nvmf_set_config", 00:31:55.723 "id": 1, 00:31:55.723 "params": { 00:31:55.723 "admin_cmd_passthru": { 00:31:55.723 "identify_ctrlr": true 00:31:55.723 } 00:31:55.723 } 00:31:55.723 } 00:31:55.723 00:31:55.723 INFO: response: 00:31:55.723 { 00:31:55.723 "jsonrpc": "2.0", 00:31:55.723 "id": 1, 00:31:55.723 "result": true 00:31:55.723 } 00:31:55.723 00:31:55.723 23:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.723 23:14:23 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:55.723 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.723 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:55.723 INFO: Setting log level to 20 00:31:55.723 INFO: Setting log level to 20 00:31:55.723 INFO: Log level set to 20 00:31:55.723 INFO: Log level set to 20 00:31:55.723 INFO: Requests: 00:31:55.723 { 00:31:55.723 "jsonrpc": "2.0", 00:31:55.723 "method": "framework_start_init", 00:31:55.723 "id": 1 00:31:55.723 } 00:31:55.723 00:31:55.723 INFO: Requests: 00:31:55.723 { 00:31:55.723 "jsonrpc": "2.0", 00:31:55.723 "method": "framework_start_init", 00:31:55.723 "id": 1 00:31:55.723 } 00:31:55.723 00:31:55.983 [2024-06-09 23:14:23.911816] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:55.983 INFO: response: 00:31:55.983 { 00:31:55.983 "jsonrpc": "2.0", 00:31:55.983 "id": 1, 00:31:55.983 "result": true 00:31:55.983 } 00:31:55.983 00:31:55.983 INFO: response: 00:31:55.983 { 00:31:55.983 "jsonrpc": "2.0", 00:31:55.983 "id": 1, 00:31:55.983 "result": true 00:31:55.983 } 00:31:55.983 00:31:55.983 23:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.983 23:14:23 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.983 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.983 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:55.983 INFO: Setting log level to 40 00:31:55.983 INFO: Setting log level to 40 00:31:55.983 INFO: Setting log level to 40 00:31:55.983 [2024-06-09 23:14:23.925052] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.983 23:14:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.983 23:14:23 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:31:55.983 23:14:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:55.983 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:55.983 23:14:23 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:55.983 23:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.983 23:14:23 -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 Nvme0n1 00:31:56.243 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.243 23:14:24 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:56.243 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.243 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.243 23:14:24 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:56.243 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.243 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.243 23:14:24 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.243 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.243 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 [2024-06-09 23:14:24.309673] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.243 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.243 23:14:24 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:56.243 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.243 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:31:56.243 [2024-06-09 23:14:24.321455] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:56.243 [ 00:31:56.243 { 00:31:56.243 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:56.243 "subtype": "Discovery", 00:31:56.243 "listen_addresses": [], 00:31:56.243 "allow_any_host": true, 00:31:56.243 "hosts": [] 00:31:56.243 }, 00:31:56.243 { 00:31:56.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.243 "subtype": "NVMe", 00:31:56.243 "listen_addresses": [ 00:31:56.243 { 00:31:56.243 "transport": "TCP", 00:31:56.243 "trtype": "TCP", 00:31:56.243 "adrfam": "IPv4", 00:31:56.243 "traddr": "10.0.0.2", 00:31:56.243 "trsvcid": "4420" 00:31:56.243 } 00:31:56.243 ], 00:31:56.243 "allow_any_host": true, 00:31:56.243 "hosts": [], 00:31:56.243 "serial_number": "SPDK00000000000001", 00:31:56.243 "model_number": "SPDK bdev Controller", 00:31:56.243 "max_namespaces": 1, 00:31:56.243 "min_cntlid": 1, 00:31:56.243 "max_cntlid": 65519, 00:31:56.243 "namespaces": [ 00:31:56.243 { 00:31:56.243 "nsid": 1, 00:31:56.243 "bdev_name": "Nvme0n1", 00:31:56.243 "name": "Nvme0n1", 00:31:56.243 "nguid": "3634473052605487002538450000003E", 00:31:56.243 "uuid": "36344730-5260-5487-0025-38450000003e" 00:31:56.243 } 00:31:56.243 ] 00:31:56.243 } 00:31:56.243 ] 00:31:56.243 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.243 23:14:24 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:56.243 23:14:24 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:56.243 23:14:24 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:56.243 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.503 23:14:24 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:31:56.503 23:14:24 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:56.503 23:14:24 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:56.503 23:14:24 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:56.503 EAL: No free 2048 kB hugepages reported on node 1 00:31:56.763 23:14:24 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:56.763 23:14:24 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:31:56.763 23:14:24 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:56.763 23:14:24 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.763 23:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:56.763 23:14:24 -- common/autotest_common.sh@10 -- # set +x 00:31:56.763 23:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:56.763 23:14:24 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:56.763 23:14:24 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:56.763 23:14:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:56.763 23:14:24 -- nvmf/common.sh@116 -- # sync 00:31:56.763 23:14:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:56.763 23:14:24 -- nvmf/common.sh@119 -- # set +e 00:31:56.763 23:14:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:56.763 23:14:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:56.763 rmmod nvme_tcp 00:31:56.763 rmmod nvme_fabrics 00:31:56.763 rmmod nvme_keyring 00:31:56.763 23:14:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:56.763 23:14:24 -- nvmf/common.sh@123 -- # set -e 00:31:56.763 23:14:24 -- nvmf/common.sh@124 -- # return 0 00:31:56.763 23:14:24 -- nvmf/common.sh@477 -- # '[' -n 120186 ']' 00:31:56.763 23:14:24 -- nvmf/common.sh@478 -- # killprocess 120186 00:31:56.763 23:14:24 -- common/autotest_common.sh@926 -- # '[' -z 120186 ']' 00:31:56.763 23:14:24 -- common/autotest_common.sh@930 -- # kill -0 120186 00:31:56.763 23:14:24 -- common/autotest_common.sh@931 -- # uname 00:31:56.763 23:14:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:56.763 23:14:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120186 00:31:56.763 23:14:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:56.763 23:14:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:56.763 23:14:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120186' 00:31:56.763 killing process with pid 120186 00:31:56.763 23:14:24 -- common/autotest_common.sh@945 -- # kill 120186 00:31:56.763 [2024-06-09 23:14:24.870588] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:56.763 23:14:24 -- common/autotest_common.sh@950 -- # wait 120186 00:31:57.023 23:14:25 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:31:57.023 23:14:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:57.023 23:14:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:57.023 23:14:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:57.023 23:14:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:57.023 23:14:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.023 23:14:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:57.023 23:14:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.566 23:14:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:59.566 00:31:59.566 real 0m12.435s 00:31:59.566 user 0m10.059s 00:31:59.566 sys 0m5.884s 00:31:59.566 23:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.566 23:14:27 -- common/autotest_common.sh@10 -- # set +x 00:31:59.566 ************************************ 00:31:59.566 END TEST nvmf_identify_passthru 00:31:59.566 ************************************ 00:31:59.566 23:14:27 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:59.566 23:14:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:59.566 23:14:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:59.566 23:14:27 -- common/autotest_common.sh@10 -- # set +x 00:31:59.566 ************************************ 00:31:59.566 START TEST nvmf_dif 00:31:59.566 ************************************ 00:31:59.566 23:14:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:59.566 * Looking for test storage... 00:31:59.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.566 23:14:27 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.566 23:14:27 -- nvmf/common.sh@7 -- # uname -s 00:31:59.566 23:14:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.566 23:14:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.566 23:14:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.566 23:14:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.566 23:14:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.566 23:14:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.566 23:14:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.566 23:14:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.566 23:14:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.566 23:14:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.566 23:14:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:59.566 23:14:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:59.566 23:14:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.566 23:14:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.566 23:14:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.566 23:14:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.567 23:14:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.567 23:14:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.567 23:14:27 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.567 23:14:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.567 23:14:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.567 23:14:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.567 23:14:27 -- paths/export.sh@5 -- # export PATH 00:31:59.567 23:14:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.567 23:14:27 -- nvmf/common.sh@46 -- # : 0 00:31:59.567 23:14:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:59.567 23:14:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:59.567 23:14:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:59.567 23:14:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.567 23:14:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.567 23:14:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:59.567 23:14:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:59.567 23:14:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:59.567 23:14:27 -- target/dif.sh@15 -- # NULL_META=16 00:31:59.567 23:14:27 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:59.567 23:14:27 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:59.567 23:14:27 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:59.567 23:14:27 -- target/dif.sh@135 -- # nvmftestinit 00:31:59.567 23:14:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:59.567 23:14:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.567 23:14:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:59.567 23:14:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:59.567 23:14:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:59.567 23:14:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.567 23:14:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:59.567 23:14:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.567 23:14:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:59.567 23:14:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
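The four NULL_* defaults set just above describe the backing device used throughout this test: a 64 MB null bdev (the size argument to bdev_null_create is in megabytes) with 512-byte blocks, 16 bytes of per-block metadata, and DIF type 1 protection. They resurface further down as the arguments of bdev_null_create; in isolation that call is:

    # one null bdev per subsystem, as issued later through rpc_cmd (name varies: bdev_null0, bdev_null1, ...)
    rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # arguments: bdev name, size (MB), block size (bytes), metadata per block (bytes), DIF type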
00:31:59.567 23:14:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:59.567 23:14:27 -- common/autotest_common.sh@10 -- # set +x 00:32:06.157 23:14:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:06.157 23:14:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:06.157 23:14:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:06.157 23:14:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:06.157 23:14:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:06.157 23:14:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:06.157 23:14:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:06.157 23:14:33 -- nvmf/common.sh@294 -- # net_devs=() 00:32:06.157 23:14:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:06.157 23:14:33 -- nvmf/common.sh@295 -- # e810=() 00:32:06.157 23:14:33 -- nvmf/common.sh@295 -- # local -ga e810 00:32:06.157 23:14:33 -- nvmf/common.sh@296 -- # x722=() 00:32:06.157 23:14:33 -- nvmf/common.sh@296 -- # local -ga x722 00:32:06.157 23:14:33 -- nvmf/common.sh@297 -- # mlx=() 00:32:06.157 23:14:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:06.157 23:14:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.157 23:14:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:06.157 23:14:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:06.157 23:14:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:06.157 23:14:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.157 23:14:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:06.157 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:06.157 23:14:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.157 23:14:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:06.157 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:06.157 23:14:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:32:06.157 23:14:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:06.157 23:14:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:06.157 23:14:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.157 23:14:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.157 23:14:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.157 23:14:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.157 23:14:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:06.157 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:06.157 23:14:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.157 23:14:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.158 23:14:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.158 23:14:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.158 23:14:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.158 23:14:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:06.158 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:06.158 23:14:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.158 23:14:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:06.158 23:14:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:06.158 23:14:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:06.158 23:14:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:06.158 23:14:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:06.158 23:14:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.158 23:14:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.158 23:14:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.158 23:14:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:06.158 23:14:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.158 23:14:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.158 23:14:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:06.158 23:14:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.158 23:14:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.158 23:14:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:06.158 23:14:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:06.158 23:14:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.158 23:14:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.158 23:14:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.158 23:14:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.158 23:14:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:06.158 23:14:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.158 23:14:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.158 23:14:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.158 23:14:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:06.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:32:06.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:32:06.158 00:32:06.158 --- 10.0.0.2 ping statistics --- 00:32:06.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.158 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:32:06.158 23:14:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:32:06.158 00:32:06.158 --- 10.0.0.1 ping statistics --- 00:32:06.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.158 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:32:06.158 23:14:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.158 23:14:34 -- nvmf/common.sh@410 -- # return 0 00:32:06.158 23:14:34 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:06.158 23:14:34 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:09.462 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:32:09.462 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:32:09.462 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:32:09.724 23:14:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.724 23:14:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:09.724 23:14:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:09.724 23:14:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.724 23:14:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:09.724 23:14:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:09.724 23:14:37 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:09.724 23:14:37 -- target/dif.sh@137 -- # nvmfappstart 00:32:09.724 23:14:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:09.724 23:14:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:09.724 23:14:37 -- common/autotest_common.sh@10 -- # set +x 00:32:09.724 23:14:37 -- nvmf/common.sh@469 -- # nvmfpid=126078 00:32:09.724 23:14:37 -- nvmf/common.sh@470 -- # waitforlisten 126078 00:32:09.724 23:14:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:09.724 23:14:37 -- common/autotest_common.sh@819 -- # '[' -z 126078 ']' 00:32:09.724 23:14:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 
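The one thing that distinguishes this setup from the plain TCP initialization used by the earlier tests is the extra transport option accumulated above: --dif-insert-or-strip asks the TCP transport to insert protection information on writes and strip it on reads at the target, so the host side of the connection does not have to handle the metadata itself. The flag takes effect when the transport is created a few lines further on; on its own that RPC is:

    # create the TCP transport with target-side DIF insert/strip enabled
    rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip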
00:32:09.724 23:14:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:09.724 23:14:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.724 23:14:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:09.724 23:14:37 -- common/autotest_common.sh@10 -- # set +x 00:32:09.724 [2024-06-09 23:14:37.886357] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:32:09.724 [2024-06-09 23:14:37.886418] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.985 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.985 [2024-06-09 23:14:37.952790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.985 [2024-06-09 23:14:38.019521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:09.985 [2024-06-09 23:14:38.019644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.985 [2024-06-09 23:14:38.019653] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.985 [2024-06-09 23:14:38.019659] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.985 [2024-06-09 23:14:38.019676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.558 23:14:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:10.558 23:14:38 -- common/autotest_common.sh@852 -- # return 0 00:32:10.558 23:14:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:10.558 23:14:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:10.558 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.558 23:14:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.558 23:14:38 -- target/dif.sh@139 -- # create_transport 00:32:10.558 23:14:38 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:10.558 23:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.558 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.558 [2024-06-09 23:14:38.682561] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.558 23:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.558 23:14:38 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:10.558 23:14:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:10.558 23:14:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:10.558 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.558 ************************************ 00:32:10.558 START TEST fio_dif_1_default 00:32:10.558 ************************************ 00:32:10.558 23:14:38 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:32:10.558 23:14:38 -- target/dif.sh@86 -- # create_subsystems 0 00:32:10.558 23:14:38 -- target/dif.sh@28 -- # local sub 00:32:10.558 23:14:38 -- target/dif.sh@30 -- # for sub in "$@" 00:32:10.558 23:14:38 -- target/dif.sh@31 -- # create_subsystem 0 00:32:10.559 23:14:38 -- target/dif.sh@18 -- # local sub_id=0 00:32:10.559 23:14:38 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:10.559 23:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.559 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.559 bdev_null0 00:32:10.559 23:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.559 23:14:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:10.559 23:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.559 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.559 23:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.559 23:14:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:10.559 23:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.559 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.559 23:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.559 23:14:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:10.559 23:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:10.559 23:14:38 -- common/autotest_common.sh@10 -- # set +x 00:32:10.820 [2024-06-09 23:14:38.738851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.820 23:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:10.820 23:14:38 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:10.820 23:14:38 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:10.820 23:14:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:10.820 23:14:38 -- nvmf/common.sh@520 -- # config=() 00:32:10.820 23:14:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:10.820 23:14:38 -- nvmf/common.sh@520 -- # local subsystem config 00:32:10.820 23:14:38 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:10.820 23:14:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:10.820 23:14:38 -- target/dif.sh@82 -- # gen_fio_conf 00:32:10.820 23:14:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:10.820 { 00:32:10.820 "params": { 00:32:10.820 "name": "Nvme$subsystem", 00:32:10.820 "trtype": "$TEST_TRANSPORT", 00:32:10.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.820 "adrfam": "ipv4", 00:32:10.820 "trsvcid": "$NVMF_PORT", 00:32:10.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.820 "hdgst": ${hdgst:-false}, 00:32:10.820 "ddgst": ${ddgst:-false} 00:32:10.820 }, 00:32:10.820 "method": "bdev_nvme_attach_controller" 00:32:10.820 } 00:32:10.820 EOF 00:32:10.820 )") 00:32:10.820 23:14:38 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:10.820 23:14:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.821 23:14:38 -- target/dif.sh@54 -- # local file 00:32:10.821 23:14:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:10.821 23:14:38 -- target/dif.sh@56 -- # cat 00:32:10.821 23:14:38 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.821 23:14:38 -- common/autotest_common.sh@1320 -- # shift 00:32:10.821 23:14:38 -- common/autotest_common.sh@1322 -- # local 
asan_lib= 00:32:10.821 23:14:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.821 23:14:38 -- nvmf/common.sh@542 -- # cat 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.821 23:14:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:10.821 23:14:38 -- target/dif.sh@72 -- # (( file <= files )) 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:10.821 23:14:38 -- nvmf/common.sh@544 -- # jq . 00:32:10.821 23:14:38 -- nvmf/common.sh@545 -- # IFS=, 00:32:10.821 23:14:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:10.821 "params": { 00:32:10.821 "name": "Nvme0", 00:32:10.821 "trtype": "tcp", 00:32:10.821 "traddr": "10.0.0.2", 00:32:10.821 "adrfam": "ipv4", 00:32:10.821 "trsvcid": "4420", 00:32:10.821 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:10.821 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:10.821 "hdgst": false, 00:32:10.821 "ddgst": false 00:32:10.821 }, 00:32:10.821 "method": "bdev_nvme_attach_controller" 00:32:10.821 }' 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:10.821 23:14:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:10.821 23:14:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:10.821 23:14:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:10.821 23:14:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:10.821 23:14:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:10.821 23:14:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:11.083 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:11.083 fio-3.35 00:32:11.083 Starting 1 thread 00:32:11.083 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.657 [2024-06-09 23:14:39.610035] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
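Behind the /dev/fd redirections, the invocation above is simply fio run with the SPDK bdev plugin preloaded and pointed at two generated inputs: the bdev JSON printed a few lines up, which attaches Nvme0 to nqn.2016-06.io.spdk:cnode0 over TCP at 10.0.0.2:4420, and a small job file that targets the resulting Nvme0n1 bdev. With ordinary file names substituted for the descriptors (bdev.json and job.fio are stand-in names), it reduces to:

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
    # per the fio banner that follows, the generated job reads Nvme0n1 with
    # rw=randread, bs=4k and iodepth=4 through the spdk_bdev ioengine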
00:32:11.657 [2024-06-09 23:14:39.610075] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:21.693 00:32:21.693 filename0: (groupid=0, jobs=1): err= 0: pid=126610: Sun Jun 9 23:14:49 2024 00:32:21.693 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10003msec) 00:32:21.693 slat (nsec): min=5330, max=30582, avg=6364.01, stdev=1642.65 00:32:21.693 clat (usec): min=41842, max=45528, avg=42008.81, stdev=254.79 00:32:21.693 lat (usec): min=41850, max=45559, avg=42015.17, stdev=255.62 00:32:21.693 clat percentiles (usec): 00:32:21.693 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:32:21.693 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:21.693 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:21.693 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:32:21.693 | 99.99th=[45351] 00:32:21.693 bw ( KiB/s): min= 352, max= 384, per=99.56%, avg=379.20, stdev=11.72, samples=20 00:32:21.693 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:32:21.693 lat (msec) : 50=100.00% 00:32:21.693 cpu : usr=95.99%, sys=3.81%, ctx=9, majf=0, minf=223 00:32:21.693 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.693 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.693 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:21.693 00:32:21.693 Run status group 0 (all jobs): 00:32:21.693 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10003-10003msec 00:32:21.693 23:14:49 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:21.693 23:14:49 -- target/dif.sh@43 -- # local sub 00:32:21.693 23:14:49 -- target/dif.sh@45 -- # for sub in "$@" 00:32:21.693 23:14:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:21.693 23:14:49 -- target/dif.sh@36 -- # local sub_id=0 00:32:21.693 23:14:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:21.693 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.693 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.693 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.693 23:14:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:21.693 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.693 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.955 00:32:21.955 real 0m11.183s 00:32:21.955 user 0m22.973s 00:32:21.955 sys 0m0.653s 00:32:21.955 23:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 ************************************ 00:32:21.955 END TEST fio_dif_1_default 00:32:21.955 ************************************ 00:32:21.955 23:14:49 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:21.955 23:14:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:21.955 23:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 ************************************ 00:32:21.955 START TEST fio_dif_1_multi_subsystems 00:32:21.955 
************************************ 00:32:21.955 23:14:49 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:32:21.955 23:14:49 -- target/dif.sh@92 -- # local files=1 00:32:21.955 23:14:49 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:21.955 23:14:49 -- target/dif.sh@28 -- # local sub 00:32:21.955 23:14:49 -- target/dif.sh@30 -- # for sub in "$@" 00:32:21.955 23:14:49 -- target/dif.sh@31 -- # create_subsystem 0 00:32:21.955 23:14:49 -- target/dif.sh@18 -- # local sub_id=0 00:32:21.955 23:14:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:21.955 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 bdev_null0 00:32:21.955 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.955 23:14:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:21.955 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.955 23:14:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:21.955 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.955 23:14:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:21.955 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.955 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.955 [2024-06-09 23:14:49.969733] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.955 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.955 23:14:49 -- target/dif.sh@30 -- # for sub in "$@" 00:32:21.955 23:14:49 -- target/dif.sh@31 -- # create_subsystem 1 00:32:21.955 23:14:49 -- target/dif.sh@18 -- # local sub_id=1 00:32:21.955 23:14:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:21.956 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.956 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.956 bdev_null1 00:32:21.956 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.956 23:14:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:21.956 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.956 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.956 23:14:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.956 23:14:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:21.956 23:14:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.956 23:14:49 -- common/autotest_common.sh@10 -- # set +x 00:32:21.956 23:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.956 23:14:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.956 23:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:21.956 23:14:50 -- common/autotest_common.sh@10 -- # set +x 
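Each of the two subsystems is assembled from the same four RPCs, parameterized only by the subsystem id; the xtrace above and immediately below collapses to roughly this loop (rpc_cmd is the harness wrapper around SPDK's rpc.py):

    for id in 0 1; do
        rpc_cmd bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 1
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
            --serial-number "53313233-$id" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" -t tcp -a 10.0.0.2 -s 4420
    done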
00:32:21.956 23:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:21.956 23:14:50 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:21.956 23:14:50 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:21.956 23:14:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:21.956 23:14:50 -- nvmf/common.sh@520 -- # config=() 00:32:21.956 23:14:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.956 23:14:50 -- nvmf/common.sh@520 -- # local subsystem config 00:32:21.956 23:14:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:21.956 23:14:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:21.956 23:14:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:21.956 { 00:32:21.956 "params": { 00:32:21.956 "name": "Nvme$subsystem", 00:32:21.956 "trtype": "$TEST_TRANSPORT", 00:32:21.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.956 "adrfam": "ipv4", 00:32:21.956 "trsvcid": "$NVMF_PORT", 00:32:21.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.956 "hdgst": ${hdgst:-false}, 00:32:21.956 "ddgst": ${ddgst:-false} 00:32:21.956 }, 00:32:21.956 "method": "bdev_nvme_attach_controller" 00:32:21.956 } 00:32:21.956 EOF 00:32:21.956 )") 00:32:21.956 23:14:50 -- target/dif.sh@82 -- # gen_fio_conf 00:32:21.956 23:14:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:21.956 23:14:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.956 23:14:50 -- target/dif.sh@54 -- # local file 00:32:21.956 23:14:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:21.956 23:14:50 -- target/dif.sh@56 -- # cat 00:32:21.956 23:14:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.956 23:14:50 -- common/autotest_common.sh@1320 -- # shift 00:32:21.956 23:14:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:21.956 23:14:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.956 23:14:50 -- nvmf/common.sh@542 -- # cat 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.956 23:14:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:21.956 23:14:50 -- target/dif.sh@72 -- # (( file <= files )) 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:21.956 23:14:50 -- target/dif.sh@73 -- # cat 00:32:21.956 23:14:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:21.956 23:14:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:21.956 { 00:32:21.956 "params": { 00:32:21.956 "name": "Nvme$subsystem", 00:32:21.956 "trtype": "$TEST_TRANSPORT", 00:32:21.956 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:21.956 "adrfam": "ipv4", 00:32:21.956 "trsvcid": "$NVMF_PORT", 00:32:21.956 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:21.956 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:21.956 "hdgst": ${hdgst:-false}, 00:32:21.956 "ddgst": ${ddgst:-false} 00:32:21.956 }, 00:32:21.956 "method": "bdev_nvme_attach_controller" 00:32:21.956 } 00:32:21.956 EOF 00:32:21.956 )") 00:32:21.956 23:14:50 -- target/dif.sh@72 -- # (( file++ )) 00:32:21.956 
23:14:50 -- target/dif.sh@72 -- # (( file <= files )) 00:32:21.956 23:14:50 -- nvmf/common.sh@542 -- # cat 00:32:21.956 23:14:50 -- nvmf/common.sh@544 -- # jq . 00:32:21.956 23:14:50 -- nvmf/common.sh@545 -- # IFS=, 00:32:21.956 23:14:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:21.956 "params": { 00:32:21.956 "name": "Nvme0", 00:32:21.956 "trtype": "tcp", 00:32:21.956 "traddr": "10.0.0.2", 00:32:21.956 "adrfam": "ipv4", 00:32:21.956 "trsvcid": "4420", 00:32:21.956 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.956 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:21.956 "hdgst": false, 00:32:21.956 "ddgst": false 00:32:21.956 }, 00:32:21.956 "method": "bdev_nvme_attach_controller" 00:32:21.956 },{ 00:32:21.956 "params": { 00:32:21.956 "name": "Nvme1", 00:32:21.956 "trtype": "tcp", 00:32:21.956 "traddr": "10.0.0.2", 00:32:21.956 "adrfam": "ipv4", 00:32:21.956 "trsvcid": "4420", 00:32:21.956 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:21.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:21.956 "hdgst": false, 00:32:21.956 "ddgst": false 00:32:21.956 }, 00:32:21.956 "method": "bdev_nvme_attach_controller" 00:32:21.956 }' 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:21.956 23:14:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:21.956 23:14:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:21.956 23:14:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:21.956 23:14:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:21.956 23:14:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:21.956 23:14:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:22.560 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:22.560 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:22.560 fio-3.35 00:32:22.560 Starting 2 threads 00:32:22.560 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.821 [2024-06-09 23:14:50.938950] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
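For orientation: the JSON printed just above is what fio receives on /dev/fd/62, and the generated job file arrives on /dev/fd/61. Below is a minimal standalone sketch of the same invocation with both pieces written to ordinary files. The bdev_nvme_attach_controller method and its parameters are copied from the printf output above; the outer "subsystems"/"bdev" wrapper, the Nvme0n1 bdev name, and the job-file options are assumptions based on SPDK's documented fio bdev plugin usage plus the fio banner and ~10 s run times in this log, not values printed here.

# Config for the spdk_bdev ioengine; one bdev_nvme_attach_controller entry per
# target subsystem (the run above attaches Nvme0 and Nvme1; only Nvme0 is shown
# here to keep the sketch short).
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Job file standing in for what gen_fio_conf writes to /dev/fd/61; the attached
# controller "Nvme0" exposes its namespace as bdev "Nvme0n1" (assumed SPDK naming).
# thread=1 is required by the SPDK fio plugin.
cat > /tmp/dif.fio <<'FIO'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
FIO

# Same pattern as the LD_PRELOAD invocation above, minus the /dev/fd plumbing.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio /tmp/dif.fio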
00:32:22.821 [2024-06-09 23:14:50.938998] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:35.062 00:32:35.062 filename0: (groupid=0, jobs=1): err= 0: pid=128961: Sun Jun 9 23:15:01 2024 00:32:35.062 read: IOPS=181, BW=724KiB/s (742kB/s)(7264KiB/10031msec) 00:32:35.062 slat (nsec): min=5374, max=30790, avg=6592.00, stdev=1832.56 00:32:35.062 clat (usec): min=1605, max=44503, avg=22075.34, stdev=20131.94 00:32:35.062 lat (usec): min=1611, max=44533, avg=22081.93, stdev=20131.99 00:32:35.062 clat percentiles (usec): 00:32:35.062 | 1.00th=[ 1762], 5.00th=[ 1811], 10.00th=[ 1827], 20.00th=[ 1860], 00:32:35.062 | 30.00th=[ 1876], 40.00th=[ 1893], 50.00th=[41681], 60.00th=[42206], 00:32:35.062 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:35.062 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:32:35.063 | 99.99th=[44303] 00:32:35.063 bw ( KiB/s): min= 672, max= 768, per=50.10%, avg=724.80, stdev=31.62, samples=20 00:32:35.063 iops : min= 168, max= 192, avg=181.20, stdev= 7.90, samples=20 00:32:35.063 lat (msec) : 2=49.78%, 50=50.22% 00:32:35.063 cpu : usr=96.98%, sys=2.80%, ctx=13, majf=0, minf=259 00:32:35.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.063 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:35.063 filename1: (groupid=0, jobs=1): err= 0: pid=128963: Sun Jun 9 23:15:01 2024 00:32:35.063 read: IOPS=180, BW=721KiB/s (739kB/s)(7232KiB/10027msec) 00:32:35.063 slat (nsec): min=5375, max=31279, avg=6369.07, stdev=1636.24 00:32:35.063 clat (usec): min=1375, max=44462, avg=22164.93, stdev=20129.97 00:32:35.063 lat (usec): min=1380, max=44493, avg=22171.30, stdev=20130.03 00:32:35.063 clat percentiles (usec): 00:32:35.063 | 1.00th=[ 1631], 5.00th=[ 1795], 10.00th=[ 1827], 20.00th=[ 1860], 00:32:35.063 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[41681], 60.00th=[42206], 00:32:35.063 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:35.063 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:32:35.063 | 99.99th=[44303] 00:32:35.063 bw ( KiB/s): min= 672, max= 768, per=49.89%, avg=721.60, stdev=31.96, samples=20 00:32:35.063 iops : min= 168, max= 192, avg=180.40, stdev= 7.99, samples=20 00:32:35.063 lat (msec) : 2=49.39%, 4=0.17%, 50=50.44% 00:32:35.063 cpu : usr=97.42%, sys=2.37%, ctx=15, majf=0, minf=94 00:32:35.063 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:35.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.063 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:35.063 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:35.063 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:35.063 00:32:35.063 Run status group 0 (all jobs): 00:32:35.063 READ: bw=1445KiB/s (1480kB/s), 721KiB/s-724KiB/s (739kB/s-742kB/s), io=14.2MiB (14.8MB), run=10027-10031msec 00:32:35.063 23:15:01 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:35.063 23:15:01 -- target/dif.sh@43 -- # local sub 00:32:35.063 23:15:01 -- target/dif.sh@45 -- # for sub in "$@" 00:32:35.063 23:15:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:35.063 23:15:01 -- 
target/dif.sh@36 -- # local sub_id=0 00:32:35.063 23:15:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.063 23:15:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.063 23:15:01 -- target/dif.sh@45 -- # for sub in "$@" 00:32:35.063 23:15:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:35.063 23:15:01 -- target/dif.sh@36 -- # local sub_id=1 00:32:35.063 23:15:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.063 23:15:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.063 00:32:35.063 real 0m11.349s 00:32:35.063 user 0m37.055s 00:32:35.063 sys 0m0.882s 00:32:35.063 23:15:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 ************************************ 00:32:35.063 END TEST fio_dif_1_multi_subsystems 00:32:35.063 ************************************ 00:32:35.063 23:15:01 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:35.063 23:15:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:35.063 23:15:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 ************************************ 00:32:35.063 START TEST fio_dif_rand_params 00:32:35.063 ************************************ 00:32:35.063 23:15:01 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:32:35.063 23:15:01 -- target/dif.sh@100 -- # local NULL_DIF 00:32:35.063 23:15:01 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:35.063 23:15:01 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:35.063 23:15:01 -- target/dif.sh@103 -- # bs=128k 00:32:35.063 23:15:01 -- target/dif.sh@103 -- # numjobs=3 00:32:35.063 23:15:01 -- target/dif.sh@103 -- # iodepth=3 00:32:35.063 23:15:01 -- target/dif.sh@103 -- # runtime=5 00:32:35.063 23:15:01 -- target/dif.sh@105 -- # create_subsystems 0 00:32:35.063 23:15:01 -- target/dif.sh@28 -- # local sub 00:32:35.063 23:15:01 -- target/dif.sh@30 -- # for sub in "$@" 00:32:35.063 23:15:01 -- target/dif.sh@31 -- # create_subsystem 0 00:32:35.063 23:15:01 -- target/dif.sh@18 -- # local sub_id=0 00:32:35.063 23:15:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.063 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.063 bdev_null0 00:32:35.063 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.063 23:15:01 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:35.063 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.064 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.064 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.064 23:15:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:35.064 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.064 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.064 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.064 23:15:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:35.064 23:15:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:35.064 23:15:01 -- common/autotest_common.sh@10 -- # set +x 00:32:35.064 [2024-06-09 23:15:01.365854] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.064 23:15:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:35.064 23:15:01 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:35.064 23:15:01 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:35.064 23:15:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:35.064 23:15:01 -- nvmf/common.sh@520 -- # config=() 00:32:35.064 23:15:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.064 23:15:01 -- nvmf/common.sh@520 -- # local subsystem config 00:32:35.064 23:15:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.064 23:15:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:35.064 23:15:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:35.064 { 00:32:35.064 "params": { 00:32:35.064 "name": "Nvme$subsystem", 00:32:35.064 "trtype": "$TEST_TRANSPORT", 00:32:35.064 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.064 "adrfam": "ipv4", 00:32:35.064 "trsvcid": "$NVMF_PORT", 00:32:35.064 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.064 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.064 "hdgst": ${hdgst:-false}, 00:32:35.064 "ddgst": ${ddgst:-false} 00:32:35.064 }, 00:32:35.064 "method": "bdev_nvme_attach_controller" 00:32:35.064 } 00:32:35.064 EOF 00:32:35.064 )") 00:32:35.064 23:15:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:35.064 23:15:01 -- target/dif.sh@82 -- # gen_fio_conf 00:32:35.064 23:15:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:35.064 23:15:01 -- target/dif.sh@54 -- # local file 00:32:35.064 23:15:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:35.064 23:15:01 -- target/dif.sh@56 -- # cat 00:32:35.064 23:15:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.064 23:15:01 -- common/autotest_common.sh@1320 -- # shift 00:32:35.064 23:15:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:35.064 23:15:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.064 23:15:01 -- nvmf/common.sh@542 -- # cat 00:32:35.064 23:15:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.064 23:15:01 -- target/dif.sh@72 -- # (( file = 1 )) 
00:32:35.064 23:15:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:35.064 23:15:01 -- target/dif.sh@72 -- # (( file <= files )) 00:32:35.064 23:15:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:35.064 23:15:01 -- nvmf/common.sh@544 -- # jq . 00:32:35.064 23:15:01 -- nvmf/common.sh@545 -- # IFS=, 00:32:35.064 23:15:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:35.064 "params": { 00:32:35.064 "name": "Nvme0", 00:32:35.064 "trtype": "tcp", 00:32:35.064 "traddr": "10.0.0.2", 00:32:35.064 "adrfam": "ipv4", 00:32:35.064 "trsvcid": "4420", 00:32:35.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.064 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.064 "hdgst": false, 00:32:35.064 "ddgst": false 00:32:35.064 }, 00:32:35.065 "method": "bdev_nvme_attach_controller" 00:32:35.065 }' 00:32:35.065 23:15:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:35.065 23:15:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:35.065 23:15:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:35.065 23:15:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:35.065 23:15:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:35.065 23:15:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:35.065 23:15:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:35.065 23:15:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:35.065 23:15:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:35.065 23:15:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:35.065 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:35.065 ... 00:32:35.065 fio-3.35 00:32:35.065 Starting 3 threads 00:32:35.065 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.065 [2024-06-09 23:15:02.214195] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
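The rpc_cmd calls in the setup above map one-to-one onto SPDK's scripts/rpc.py. For reference, a hedged sketch of the same create/teardown sequence driven through rpc.py directly: the command names and arguments are copied from the xtrace above, while the rpc.py path and the note about the transport are assumptions about the surrounding environment rather than anything printed in this excerpt.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # adjust to your SPDK checkout

# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3,
# exactly the bdev_null_create arguments used by create_subsystem 0 above.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# NVMe-oF subsystem backed by that bdev, listening on TCP 10.0.0.2:4420.
# On a fresh target the TCP transport must exist first
# (rpc.py nvmf_create_transport -t tcp); the harness presumably creates it
# earlier in this job, before the section shown here.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Teardown, mirroring destroy_subsystems 0 above.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$RPC bdev_null_delete bdev_null0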
00:32:35.065 [2024-06-09 23:15:02.214242] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:39.277 00:32:39.277 filename0: (groupid=0, jobs=1): err= 0: pid=131476: Sun Jun 9 23:15:07 2024 00:32:39.277 read: IOPS=85, BW=10.7MiB/s (11.2MB/s)(53.6MiB/5017msec) 00:32:39.277 slat (nsec): min=5364, max=33191, avg=7469.51, stdev=1869.46 00:32:39.278 clat (usec): min=8777, max=59738, avg=35066.39, stdev=21317.90 00:32:39.278 lat (usec): min=8785, max=59746, avg=35073.86, stdev=21318.04 00:32:39.278 clat percentiles (usec): 00:32:39.278 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[11863], 20.00th=[13042], 00:32:39.278 | 30.00th=[13829], 40.00th=[15270], 50.00th=[50594], 60.00th=[54264], 00:32:39.278 | 70.00th=[55313], 80.00th=[56886], 90.00th=[57410], 95.00th=[57934], 00:32:39.278 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:32:39.278 | 99.99th=[59507] 00:32:39.278 bw ( KiB/s): min= 8448, max=14592, per=28.96%, avg=10905.60, stdev=1802.94, samples=10 00:32:39.278 iops : min= 66, max= 114, avg=85.20, stdev=14.09, samples=10 00:32:39.278 lat (msec) : 10=2.56%, 20=46.39%, 100=51.05% 00:32:39.278 cpu : usr=97.29%, sys=2.39%, ctx=7, majf=0, minf=91 00:32:39.278 IO depths : 1=9.6%, 2=90.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 issued rwts: total=429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:39.278 filename0: (groupid=0, jobs=1): err= 0: pid=131477: Sun Jun 9 23:15:07 2024 00:32:39.278 read: IOPS=130, BW=16.3MiB/s (17.1MB/s)(81.6MiB/5019msec) 00:32:39.278 slat (nsec): min=5369, max=41311, avg=6873.60, stdev=2254.44 00:32:39.278 clat (usec): min=7376, max=94854, avg=23031.61, stdev=20055.92 00:32:39.278 lat (usec): min=7381, max=94859, avg=23038.48, stdev=20056.29 00:32:39.278 clat percentiles (usec): 00:32:39.278 | 1.00th=[ 7635], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 9372], 00:32:39.278 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11994], 60.00th=[13304], 00:32:39.278 | 70.00th=[15401], 80.00th=[53740], 90.00th=[55837], 95.00th=[57410], 00:32:39.278 | 99.00th=[58459], 99.50th=[65274], 99.90th=[94897], 99.95th=[94897], 00:32:39.278 | 99.99th=[94897] 00:32:39.278 bw ( KiB/s): min= 8448, max=24832, per=44.19%, avg=16640.00, stdev=5261.68, samples=10 00:32:39.278 iops : min= 66, max= 194, avg=130.00, stdev=41.11, samples=10 00:32:39.278 lat (msec) : 10=28.64%, 20=44.10%, 50=2.14%, 100=25.11% 00:32:39.278 cpu : usr=96.71%, sys=2.93%, ctx=12, majf=0, minf=149 00:32:39.278 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:39.278 filename0: (groupid=0, jobs=1): err= 0: pid=131478: Sun Jun 9 23:15:07 2024 00:32:39.278 read: IOPS=78, BW=9.85MiB/s (10.3MB/s)(49.5MiB/5024msec) 00:32:39.278 slat (nsec): min=5349, max=29758, avg=7358.12, stdev=2062.59 00:32:39.278 clat (usec): min=7323, max=53702, avg=38040.19, stdev=19134.47 00:32:39.278 lat (usec): min=7328, max=53711, avg=38047.55, stdev=19134.69 00:32:39.278 clat percentiles (usec): 
00:32:39.278 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[ 9896], 00:32:39.278 | 30.00th=[11994], 40.00th=[50070], 50.00th=[50594], 60.00th=[50594], 00:32:39.278 | 70.00th=[51119], 80.00th=[51119], 90.00th=[51643], 95.00th=[52167], 00:32:39.278 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:32:39.278 | 99.99th=[53740] 00:32:39.278 bw ( KiB/s): min= 6912, max=12288, per=26.72%, avg=10060.80, stdev=1637.20, samples=10 00:32:39.278 iops : min= 54, max= 96, avg=78.60, stdev=12.79, samples=10 00:32:39.278 lat (msec) : 10=21.46%, 20=9.60%, 50=8.84%, 100=60.10% 00:32:39.278 cpu : usr=97.91%, sys=1.81%, ctx=8, majf=0, minf=71 00:32:39.278 IO depths : 1=22.0%, 2=78.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:39.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.278 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:39.278 00:32:39.278 Run status group 0 (all jobs): 00:32:39.278 READ: bw=36.8MiB/s (38.6MB/s), 9.85MiB/s-16.3MiB/s (10.3MB/s-17.1MB/s), io=185MiB (194MB), run=5017-5024msec 00:32:39.540 23:15:07 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:39.540 23:15:07 -- target/dif.sh@43 -- # local sub 00:32:39.540 23:15:07 -- target/dif.sh@45 -- # for sub in "$@" 00:32:39.540 23:15:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:39.540 23:15:07 -- target/dif.sh@36 -- # local sub_id=0 00:32:39.540 23:15:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:39.540 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.540 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.540 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.540 23:15:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:39.540 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.540 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.540 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # bs=4k 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # numjobs=8 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # iodepth=16 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # runtime= 00:32:39.540 23:15:07 -- target/dif.sh@109 -- # files=2 00:32:39.540 23:15:07 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:39.540 23:15:07 -- target/dif.sh@28 -- # local sub 00:32:39.540 23:15:07 -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.540 23:15:07 -- target/dif.sh@31 -- # create_subsystem 0 00:32:39.540 23:15:07 -- target/dif.sh@18 -- # local sub_id=0 00:32:39.540 23:15:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:39.540 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.540 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.540 bdev_null0 00:32:39.540 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 [2024-06-09 23:15:07.531488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.541 23:15:07 -- target/dif.sh@31 -- # create_subsystem 1 00:32:39.541 23:15:07 -- target/dif.sh@18 -- # local sub_id=1 00:32:39.541 23:15:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 bdev_null1 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@30 -- # for sub in "$@" 00:32:39.541 23:15:07 -- target/dif.sh@31 -- # create_subsystem 2 00:32:39.541 23:15:07 -- target/dif.sh@18 -- # local sub_id=2 00:32:39.541 23:15:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 bdev_null2 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- 
common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:39.541 23:15:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:39.541 23:15:07 -- common/autotest_common.sh@10 -- # set +x 00:32:39.541 23:15:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:39.541 23:15:07 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:39.541 23:15:07 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:39.541 23:15:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:39.541 23:15:07 -- nvmf/common.sh@520 -- # config=() 00:32:39.541 23:15:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.541 23:15:07 -- nvmf/common.sh@520 -- # local subsystem config 00:32:39.541 23:15:07 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:39.541 23:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:39.541 { 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme$subsystem", 00:32:39.541 "trtype": "$TEST_TRANSPORT", 00:32:39.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": "$NVMF_PORT", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.541 "hdgst": ${hdgst:-false}, 00:32:39.541 "ddgst": ${ddgst:-false} 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 } 00:32:39.541 EOF 00:32:39.541 )") 00:32:39.541 23:15:07 -- target/dif.sh@82 -- # gen_fio_conf 00:32:39.541 23:15:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:39.541 23:15:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:39.541 23:15:07 -- target/dif.sh@54 -- # local file 00:32:39.541 23:15:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:39.541 23:15:07 -- target/dif.sh@56 -- # cat 00:32:39.541 23:15:07 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.541 23:15:07 -- common/autotest_common.sh@1320 -- # shift 00:32:39.541 23:15:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:39.541 23:15:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # cat 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:39.541 23:15:07 -- target/dif.sh@73 -- # cat 00:32:39.541 23:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:39.541 { 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme$subsystem", 00:32:39.541 "trtype": "$TEST_TRANSPORT", 00:32:39.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": 
"$NVMF_PORT", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.541 "hdgst": ${hdgst:-false}, 00:32:39.541 "ddgst": ${ddgst:-false} 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 } 00:32:39.541 EOF 00:32:39.541 )") 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file++ )) 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.541 23:15:07 -- target/dif.sh@73 -- # cat 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # cat 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file++ )) 00:32:39.541 23:15:07 -- target/dif.sh@72 -- # (( file <= files )) 00:32:39.541 23:15:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:39.541 { 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme$subsystem", 00:32:39.541 "trtype": "$TEST_TRANSPORT", 00:32:39.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": "$NVMF_PORT", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:39.541 "hdgst": ${hdgst:-false}, 00:32:39.541 "ddgst": ${ddgst:-false} 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 } 00:32:39.541 EOF 00:32:39.541 )") 00:32:39.541 23:15:07 -- nvmf/common.sh@542 -- # cat 00:32:39.541 23:15:07 -- nvmf/common.sh@544 -- # jq . 00:32:39.541 23:15:07 -- nvmf/common.sh@545 -- # IFS=, 00:32:39.541 23:15:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme0", 00:32:39.541 "trtype": "tcp", 00:32:39.541 "traddr": "10.0.0.2", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": "4420", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:39.541 "hdgst": false, 00:32:39.541 "ddgst": false 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 },{ 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme1", 00:32:39.541 "trtype": "tcp", 00:32:39.541 "traddr": "10.0.0.2", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": "4420", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:39.541 "hdgst": false, 00:32:39.541 "ddgst": false 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 },{ 00:32:39.541 "params": { 00:32:39.541 "name": "Nvme2", 00:32:39.541 "trtype": "tcp", 00:32:39.541 "traddr": "10.0.0.2", 00:32:39.541 "adrfam": "ipv4", 00:32:39.541 "trsvcid": "4420", 00:32:39.541 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:39.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:39.541 "hdgst": false, 00:32:39.541 "ddgst": false 00:32:39.541 }, 00:32:39.541 "method": "bdev_nvme_attach_controller" 00:32:39.541 }' 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:39.541 23:15:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:39.541 23:15:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:39.541 23:15:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:39.542 23:15:07 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:39.542 23:15:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:39.542 23:15:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:40.149 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:40.149 ... 00:32:40.149 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:40.149 ... 00:32:40.149 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:40.149 ... 00:32:40.149 fio-3.35 00:32:40.149 Starting 24 threads 00:32:40.149 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.722 [2024-06-09 23:15:08.784173] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:40.722 [2024-06-09 23:15:08.784218] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:52.963 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132861: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=526, BW=2106KiB/s (2157kB/s)(20.6MiB/10004msec) 00:32:52.963 slat (usec): min=5, max=110, avg=11.58, stdev= 9.67 00:32:52.963 clat (usec): min=3578, max=58683, avg=30324.31, stdev=6634.95 00:32:52.963 lat (usec): min=3588, max=58693, avg=30335.89, stdev=6635.41 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[ 8979], 5.00th=[20055], 10.00th=[25560], 20.00th=[27132], 00:32:52.963 | 30.00th=[27919], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:32:52.963 | 70.00th=[30540], 80.00th=[35390], 90.00th=[38536], 95.00th=[41681], 00:32:52.963 | 99.00th=[50070], 99.50th=[54789], 99.90th=[57934], 99.95th=[57934], 00:32:52.963 | 99.99th=[58459] 00:32:52.963 bw ( KiB/s): min= 1888, max= 2436, per=4.20%, avg=2110.11, stdev=120.62, samples=19 00:32:52.963 iops : min= 472, max= 609, avg=527.53, stdev=30.16, samples=19 00:32:52.963 lat (msec) : 4=0.13%, 10=1.06%, 20=3.82%, 50=93.91%, 100=1.08% 00:32:52.963 cpu : usr=98.88%, sys=0.76%, ctx=23, majf=0, minf=9 00:32:52.963 IO depths : 1=0.9%, 2=2.0%, 4=9.9%, 8=74.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=90.6%, 8=5.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132862: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=505, BW=2023KiB/s (2071kB/s)(19.8MiB/10012msec) 00:32:52.963 slat (nsec): min=5374, max=90204, avg=12428.10, stdev=9828.28 00:32:52.963 clat (usec): min=13841, max=56756, avg=31563.94, stdev=6308.96 00:32:52.963 lat (usec): min=13847, max=56766, avg=31576.37, stdev=6309.34 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[17171], 5.00th=[24773], 10.00th=[26346], 20.00th=[27657], 00:32:52.963 | 30.00th=[28181], 40.00th=[28967], 50.00th=[29492], 60.00th=[30016], 00:32:52.963 | 70.00th=[34341], 80.00th=[36963], 90.00th=[40109], 95.00th=[43779], 00:32:52.963 | 99.00th=[52167], 99.50th=[54789], 99.90th=[55837], 99.95th=[55837], 00:32:52.963 | 99.99th=[56886] 00:32:52.963 bw ( KiB/s): min= 1864, max= 2256, per=4.02%, avg=2021.05, stdev=94.55, 
samples=19 00:32:52.963 iops : min= 466, max= 564, avg=505.26, stdev=23.64, samples=19 00:32:52.963 lat (msec) : 20=2.11%, 50=96.70%, 100=1.19% 00:32:52.963 cpu : usr=99.00%, sys=0.65%, ctx=15, majf=0, minf=11 00:32:52.963 IO depths : 1=0.2%, 2=0.7%, 4=8.4%, 8=76.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=90.6%, 8=5.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132863: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=513, BW=2052KiB/s (2101kB/s)(20.1MiB/10009msec) 00:32:52.963 slat (nsec): min=5379, max=95975, avg=13615.35, stdev=10709.96 00:32:52.963 clat (usec): min=11259, max=63972, avg=31106.73, stdev=6722.54 00:32:52.963 lat (usec): min=11265, max=63995, avg=31120.35, stdev=6722.74 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[15664], 5.00th=[20317], 10.00th=[26084], 20.00th=[27395], 00:32:52.963 | 30.00th=[27919], 40.00th=[28705], 50.00th=[29230], 60.00th=[30016], 00:32:52.963 | 70.00th=[32375], 80.00th=[36963], 90.00th=[40109], 95.00th=[43254], 00:32:52.963 | 99.00th=[49546], 99.50th=[56361], 99.90th=[63701], 99.95th=[63701], 00:32:52.963 | 99.99th=[64226] 00:32:52.963 bw ( KiB/s): min= 1792, max= 2176, per=4.08%, avg=2051.79, stdev=100.18, samples=19 00:32:52.963 iops : min= 448, max= 544, avg=512.95, stdev=25.05, samples=19 00:32:52.963 lat (msec) : 20=4.30%, 50=94.76%, 100=0.93% 00:32:52.963 cpu : usr=98.97%, sys=0.67%, ctx=16, majf=0, minf=9 00:32:52.963 IO depths : 1=0.4%, 2=0.9%, 4=8.5%, 8=76.2%, 16=14.0%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=90.3%, 8=5.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132864: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=556, BW=2226KiB/s (2280kB/s)(21.8MiB/10017msec) 00:32:52.963 slat (nsec): min=5375, max=90228, avg=9971.98, stdev=8286.26 00:32:52.963 clat (usec): min=12399, max=56433, avg=28676.57, stdev=5334.91 00:32:52.963 lat (usec): min=12407, max=56439, avg=28686.55, stdev=5335.99 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[15795], 5.00th=[18482], 10.00th=[20841], 20.00th=[26870], 00:32:52.963 | 30.00th=[27657], 40.00th=[27919], 50.00th=[28443], 60.00th=[28967], 00:32:52.963 | 70.00th=[29492], 80.00th=[30278], 90.00th=[35390], 95.00th=[39060], 00:32:52.963 | 99.00th=[44303], 99.50th=[47449], 99.90th=[54264], 99.95th=[54264], 00:32:52.963 | 99.99th=[56361] 00:32:52.963 bw ( KiB/s): min= 1976, max= 2736, per=4.43%, avg=2226.11, stdev=174.96, samples=19 00:32:52.963 iops : min= 494, max= 684, avg=556.53, stdev=43.74, samples=19 00:32:52.963 lat (msec) : 20=7.91%, 50=91.86%, 100=0.23% 00:32:52.963 cpu : usr=99.01%, sys=0.64%, ctx=16, majf=0, minf=9 00:32:52.963 IO depths : 1=3.5%, 2=7.3%, 4=18.2%, 8=61.3%, 16=9.7%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=92.6%, 8=2.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132865: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=523, BW=2096KiB/s (2146kB/s)(20.5MiB/10003msec) 00:32:52.963 slat (nsec): min=5382, max=82908, avg=15777.93, stdev=11952.65 00:32:52.963 clat (usec): min=2411, max=63551, avg=30430.67, stdev=6275.54 00:32:52.963 lat (usec): min=2416, max=63572, avg=30446.45, stdev=6275.60 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[14091], 5.00th=[20055], 10.00th=[26084], 20.00th=[27395], 00:32:52.963 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[29492], 00:32:52.963 | 70.00th=[30278], 80.00th=[35390], 90.00th=[38536], 95.00th=[41157], 00:32:52.963 | 99.00th=[50594], 99.50th=[55313], 99.90th=[58983], 99.95th=[63177], 00:32:52.963 | 99.99th=[63701] 00:32:52.963 bw ( KiB/s): min= 1627, max= 2288, per=4.14%, avg=2078.89, stdev=154.09, samples=19 00:32:52.963 iops : min= 406, max= 572, avg=519.68, stdev=38.65, samples=19 00:32:52.963 lat (msec) : 4=0.31%, 10=0.19%, 20=3.95%, 50=94.54%, 100=1.01% 00:32:52.963 cpu : usr=99.00%, sys=0.66%, ctx=15, majf=0, minf=9 00:32:52.963 IO depths : 1=1.8%, 2=4.0%, 4=13.7%, 8=68.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=91.5%, 8=3.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132866: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=542, BW=2172KiB/s (2224kB/s)(21.3MiB/10021msec) 00:32:52.963 slat (usec): min=5, max=169, avg=14.86, stdev=12.39 00:32:52.963 clat (usec): min=3591, max=57231, avg=29364.01, stdev=6446.27 00:32:52.963 lat (usec): min=3614, max=57292, avg=29378.88, stdev=6447.16 00:32:52.963 clat percentiles (usec): 00:32:52.963 | 1.00th=[ 6325], 5.00th=[18744], 10.00th=[23200], 20.00th=[26870], 00:32:52.963 | 30.00th=[27657], 40.00th=[28181], 50.00th=[28705], 60.00th=[29230], 00:32:52.963 | 70.00th=[30016], 80.00th=[33424], 90.00th=[37487], 95.00th=[40633], 00:32:52.963 | 99.00th=[46924], 99.50th=[48497], 99.90th=[54789], 99.95th=[57410], 00:32:52.963 | 99.99th=[57410] 00:32:52.963 bw ( KiB/s): min= 2024, max= 2720, per=4.32%, avg=2170.00, stdev=147.83, samples=20 00:32:52.963 iops : min= 506, max= 680, avg=542.50, stdev=36.96, samples=20 00:32:52.963 lat (msec) : 4=0.26%, 10=1.51%, 20=4.39%, 50=93.59%, 100=0.26% 00:32:52.963 cpu : usr=98.62%, sys=0.87%, ctx=114, majf=0, minf=0 00:32:52.963 IO depths : 1=1.2%, 2=2.6%, 4=10.8%, 8=72.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:32:52.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 complete : 0=0.0%, 4=90.8%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.963 issued rwts: total=5441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.963 filename0: (groupid=0, jobs=1): err= 0: pid=132867: Sun Jun 9 23:15:18 2024 00:32:52.963 read: IOPS=508, BW=2035KiB/s (2084kB/s)(19.9MiB/10021msec) 00:32:52.963 slat (nsec): min=5387, max=81354, avg=12820.41, stdev=10021.61 00:32:52.964 clat (usec): min=11512, max=60040, avg=31353.49, stdev=6320.90 00:32:52.964 lat (usec): min=11519, max=60060, avg=31366.31, stdev=6321.40 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[16450], 5.00th=[21890], 10.00th=[26346], 20.00th=[27657], 00:32:52.964 | 
30.00th=[28181], 40.00th=[28705], 50.00th=[29492], 60.00th=[30278], 00:32:52.964 | 70.00th=[34341], 80.00th=[36963], 90.00th=[39584], 95.00th=[43254], 00:32:52.964 | 99.00th=[49021], 99.50th=[50070], 99.90th=[58983], 99.95th=[59507], 00:32:52.964 | 99.99th=[60031] 00:32:52.964 bw ( KiB/s): min= 1664, max= 2176, per=4.05%, avg=2036.63, stdev=127.00, samples=19 00:32:52.964 iops : min= 416, max= 544, avg=509.16, stdev=31.75, samples=19 00:32:52.964 lat (msec) : 20=3.73%, 50=95.61%, 100=0.67% 00:32:52.964 cpu : usr=98.76%, sys=0.87%, ctx=21, majf=0, minf=9 00:32:52.964 IO depths : 1=1.3%, 2=2.7%, 4=11.1%, 8=72.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename0: (groupid=0, jobs=1): err= 0: pid=132868: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=506, BW=2028KiB/s (2077kB/s)(19.8MiB/10006msec) 00:32:52.964 slat (nsec): min=5383, max=89836, avg=14271.58, stdev=11271.75 00:32:52.964 clat (usec): min=9600, max=59527, avg=31474.63, stdev=6553.64 00:32:52.964 lat (usec): min=9606, max=59539, avg=31488.90, stdev=6553.35 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[16057], 5.00th=[21365], 10.00th=[25822], 20.00th=[27395], 00:32:52.964 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29492], 60.00th=[30278], 00:32:52.964 | 70.00th=[34866], 80.00th=[37487], 90.00th=[40109], 95.00th=[42730], 00:32:52.964 | 99.00th=[50594], 99.50th=[53740], 99.90th=[58459], 99.95th=[59507], 00:32:52.964 | 99.99th=[59507] 00:32:52.964 bw ( KiB/s): min= 1880, max= 2176, per=4.04%, avg=2028.37, stdev=91.81, samples=19 00:32:52.964 iops : min= 470, max= 544, avg=507.05, stdev=23.00, samples=19 00:32:52.964 lat (msec) : 10=0.08%, 20=3.47%, 50=95.41%, 100=1.04% 00:32:52.964 cpu : usr=99.00%, sys=0.64%, ctx=17, majf=0, minf=11 00:32:52.964 IO depths : 1=0.5%, 2=1.2%, 4=10.1%, 8=74.6%, 16=13.7%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=90.8%, 8=5.2%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132870: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=521, BW=2085KiB/s (2135kB/s)(20.4MiB/10011msec) 00:32:52.964 slat (nsec): min=5376, max=78911, avg=11937.68, stdev=9345.40 00:32:52.964 clat (usec): min=11143, max=76846, avg=30629.48, stdev=6109.85 00:32:52.964 lat (usec): min=11150, max=76864, avg=30641.42, stdev=6110.53 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[17695], 5.00th=[21890], 10.00th=[26084], 20.00th=[27395], 00:32:52.964 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:32:52.964 | 70.00th=[30278], 80.00th=[35914], 90.00th=[38536], 95.00th=[41157], 00:32:52.964 | 99.00th=[46924], 99.50th=[53740], 99.90th=[77071], 99.95th=[77071], 00:32:52.964 | 99.99th=[77071] 00:32:52.964 bw ( KiB/s): min= 1744, max= 2256, per=4.13%, avg=2075.37, stdev=128.40, samples=19 00:32:52.964 iops : min= 436, max= 564, avg=518.84, stdev=32.10, samples=19 00:32:52.964 lat (msec) : 20=3.49%, 50=95.92%, 100=0.59% 00:32:52.964 cpu : usr=99.00%, sys=0.65%, ctx=16, majf=0, minf=9 00:32:52.964 IO 
depths : 1=1.6%, 2=3.5%, 4=12.1%, 8=70.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=91.1%, 8=4.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132871: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=528, BW=2115KiB/s (2165kB/s)(20.7MiB/10026msec) 00:32:52.964 slat (usec): min=5, max=477, avg=14.44, stdev=12.73 00:32:52.964 clat (usec): min=13428, max=60768, avg=30164.24, stdev=5523.75 00:32:52.964 lat (usec): min=13437, max=60794, avg=30178.68, stdev=5523.22 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[17171], 5.00th=[24249], 10.00th=[26608], 20.00th=[27657], 00:32:52.964 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28967], 60.00th=[29492], 00:32:52.964 | 70.00th=[30016], 80.00th=[31327], 90.00th=[36963], 95.00th=[39060], 00:32:52.964 | 99.00th=[53216], 99.50th=[56361], 99.90th=[60556], 99.95th=[60556], 00:32:52.964 | 99.99th=[60556] 00:32:52.964 bw ( KiB/s): min= 1952, max= 2272, per=4.21%, avg=2114.53, stdev=106.57, samples=19 00:32:52.964 iops : min= 488, max= 568, avg=528.63, stdev=26.64, samples=19 00:32:52.964 lat (msec) : 20=2.47%, 50=95.98%, 100=1.55% 00:32:52.964 cpu : usr=98.81%, sys=0.80%, ctx=17, majf=0, minf=9 00:32:52.964 IO depths : 1=0.8%, 2=1.8%, 4=8.6%, 8=75.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=90.1%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132872: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=523, BW=2093KiB/s (2144kB/s)(20.5MiB/10018msec) 00:32:52.964 slat (nsec): min=5380, max=81131, avg=13617.70, stdev=10835.93 00:32:52.964 clat (usec): min=9339, max=53381, avg=30490.23, stdev=5958.74 00:32:52.964 lat (usec): min=9346, max=53388, avg=30503.85, stdev=5959.05 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[15926], 5.00th=[21365], 10.00th=[25822], 20.00th=[27395], 00:32:52.964 | 30.00th=[27919], 40.00th=[28443], 50.00th=[28967], 60.00th=[29492], 00:32:52.964 | 70.00th=[30540], 80.00th=[35390], 90.00th=[39584], 95.00th=[42206], 00:32:52.964 | 99.00th=[46924], 99.50th=[48497], 99.90th=[50070], 99.95th=[51643], 00:32:52.964 | 99.99th=[53216] 00:32:52.964 bw ( KiB/s): min= 1944, max= 2208, per=4.17%, avg=2095.58, stdev=83.84, samples=19 00:32:52.964 iops : min= 486, max= 552, avg=523.89, stdev=20.96, samples=19 00:32:52.964 lat (msec) : 10=0.08%, 20=3.64%, 50=96.15%, 100=0.13% 00:32:52.964 cpu : usr=99.03%, sys=0.59%, ctx=14, majf=0, minf=9 00:32:52.964 IO depths : 1=0.6%, 2=1.3%, 4=8.7%, 8=75.2%, 16=14.2%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=90.4%, 8=6.2%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132873: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=523, BW=2095KiB/s (2145kB/s)(20.5MiB/10017msec) 00:32:52.964 slat (usec): min=5, 
max=481, avg=12.43, stdev=12.08 00:32:52.964 clat (usec): min=10511, max=55428, avg=30464.28, stdev=5739.39 00:32:52.964 lat (usec): min=10518, max=55457, avg=30476.71, stdev=5740.54 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[15008], 5.00th=[21890], 10.00th=[26084], 20.00th=[27395], 00:32:52.964 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:32:52.964 | 70.00th=[30540], 80.00th=[35390], 90.00th=[38536], 95.00th=[41681], 00:32:52.964 | 99.00th=[46924], 99.50th=[49021], 99.90th=[54789], 99.95th=[55313], 00:32:52.964 | 99.99th=[55313] 00:32:52.964 bw ( KiB/s): min= 1864, max= 2256, per=4.16%, avg=2087.58, stdev=106.21, samples=19 00:32:52.964 iops : min= 466, max= 564, avg=521.89, stdev=26.55, samples=19 00:32:52.964 lat (msec) : 20=3.81%, 50=96.00%, 100=0.19% 00:32:52.964 cpu : usr=98.88%, sys=0.77%, ctx=18, majf=0, minf=9 00:32:52.964 IO depths : 1=1.3%, 2=3.7%, 4=14.1%, 8=68.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=91.6%, 8=3.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132874: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=537, BW=2152KiB/s (2203kB/s)(21.0MiB/10017msec) 00:32:52.964 slat (nsec): min=5376, max=71379, avg=8520.73, stdev=5227.64 00:32:52.964 clat (usec): min=12618, max=59320, avg=29672.03, stdev=5540.60 00:32:52.964 lat (usec): min=12624, max=59337, avg=29680.55, stdev=5540.76 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[16188], 5.00th=[19792], 10.00th=[24511], 20.00th=[27132], 00:32:52.964 | 30.00th=[27657], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:32:52.964 | 70.00th=[30016], 80.00th=[33424], 90.00th=[37487], 95.00th=[39584], 00:32:52.964 | 99.00th=[46400], 99.50th=[47973], 99.90th=[50070], 99.95th=[58983], 00:32:52.964 | 99.99th=[59507] 00:32:52.964 bw ( KiB/s): min= 2024, max= 2384, per=4.28%, avg=2150.74, stdev=90.09, samples=19 00:32:52.964 iops : min= 506, max= 596, avg=537.68, stdev=22.52, samples=19 00:32:52.964 lat (msec) : 20=5.59%, 50=94.25%, 100=0.17% 00:32:52.964 cpu : usr=99.02%, sys=0.65%, ctx=14, majf=0, minf=9 00:32:52.964 IO depths : 1=1.1%, 2=4.4%, 4=16.4%, 8=66.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 complete : 0=0.0%, 4=92.1%, 8=2.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.964 issued rwts: total=5388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.964 filename1: (groupid=0, jobs=1): err= 0: pid=132875: Sun Jun 9 23:15:18 2024 00:32:52.964 read: IOPS=509, BW=2037KiB/s (2086kB/s)(19.9MiB/10002msec) 00:32:52.964 slat (nsec): min=5372, max=97168, avg=14414.30, stdev=11549.21 00:32:52.964 clat (usec): min=5016, max=79707, avg=31336.92, stdev=6663.83 00:32:52.964 lat (usec): min=5021, max=79728, avg=31351.34, stdev=6663.29 00:32:52.964 clat percentiles (usec): 00:32:52.964 | 1.00th=[15795], 5.00th=[21627], 10.00th=[26346], 20.00th=[27395], 00:32:52.964 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[30278], 00:32:52.964 | 70.00th=[34341], 80.00th=[36963], 90.00th=[39584], 95.00th=[42206], 00:32:52.964 | 99.00th=[52691], 99.50th=[54789], 99.90th=[65799], 99.95th=[79168], 00:32:52.964 | 99.99th=[80217] 
00:32:52.964 bw ( KiB/s): min= 1592, max= 2240, per=4.04%, avg=2030.32, stdev=138.37, samples=19 00:32:52.964 iops : min= 398, max= 560, avg=507.58, stdev=34.59, samples=19 00:32:52.964 lat (msec) : 10=0.20%, 20=3.97%, 50=94.62%, 100=1.22% 00:32:52.964 cpu : usr=98.97%, sys=0.66%, ctx=16, majf=0, minf=9 00:32:52.964 IO depths : 1=0.3%, 2=0.8%, 4=8.7%, 8=76.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:32:52.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename1: (groupid=0, jobs=1): err= 0: pid=132876: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=575, BW=2300KiB/s (2356kB/s)(22.5MiB/10026msec) 00:32:52.965 slat (usec): min=5, max=483, avg= 9.35, stdev= 9.32 00:32:52.965 clat (usec): min=4259, max=50976, avg=27737.20, stdev=5472.19 00:32:52.965 lat (usec): min=4271, max=50983, avg=27746.56, stdev=5472.85 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[ 5997], 5.00th=[17957], 10.00th=[20055], 20.00th=[26608], 00:32:52.965 | 30.00th=[27395], 40.00th=[27919], 50.00th=[28443], 60.00th=[28705], 00:32:52.965 | 70.00th=[29230], 80.00th=[29754], 90.00th=[31065], 95.00th=[35914], 00:32:52.965 | 99.00th=[44827], 99.50th=[47449], 99.90th=[51119], 99.95th=[51119], 00:32:52.965 | 99.99th=[51119] 00:32:52.965 bw ( KiB/s): min= 2048, max= 2864, per=4.58%, avg=2300.00, stdev=177.91, samples=20 00:32:52.965 iops : min= 512, max= 716, avg=575.00, stdev=44.48, samples=20 00:32:52.965 lat (msec) : 10=1.54%, 20=7.79%, 50=90.46%, 100=0.21% 00:32:52.965 cpu : usr=99.06%, sys=0.59%, ctx=14, majf=0, minf=9 00:32:52.965 IO depths : 1=1.7%, 2=4.1%, 4=13.0%, 8=69.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename1: (groupid=0, jobs=1): err= 0: pid=132878: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=518, BW=2076KiB/s (2126kB/s)(20.3MiB/10006msec) 00:32:52.965 slat (usec): min=5, max=115, avg=18.18, stdev=14.87 00:32:52.965 clat (usec): min=8255, max=59481, avg=30706.45, stdev=6032.28 00:32:52.965 lat (usec): min=8260, max=59497, avg=30724.63, stdev=6031.77 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[16581], 5.00th=[22152], 10.00th=[26346], 20.00th=[27395], 00:32:52.965 | 30.00th=[27919], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:32:52.965 | 70.00th=[30802], 80.00th=[35914], 90.00th=[38536], 95.00th=[42206], 00:32:52.965 | 99.00th=[49546], 99.50th=[50594], 99.90th=[56361], 99.95th=[59507], 00:32:52.965 | 99.99th=[59507] 00:32:52.965 bw ( KiB/s): min= 1696, max= 2208, per=4.12%, avg=2071.32, stdev=125.02, samples=19 00:32:52.965 iops : min= 424, max= 552, avg=517.79, stdev=31.30, samples=19 00:32:52.965 lat (msec) : 10=0.04%, 20=3.43%, 50=95.88%, 100=0.65% 00:32:52.965 cpu : usr=99.08%, sys=0.54%, ctx=43, majf=0, minf=9 00:32:52.965 IO depths : 1=0.2%, 2=1.9%, 4=12.6%, 8=71.6%, 16=13.6%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=91.7%, 8=3.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: 
total=5193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132879: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10018msec) 00:32:52.965 slat (nsec): min=5405, max=91479, avg=15746.55, stdev=13121.99 00:32:52.965 clat (usec): min=10815, max=57042, avg=31762.61, stdev=6465.39 00:32:52.965 lat (usec): min=10827, max=57051, avg=31778.35, stdev=6464.78 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[17171], 5.00th=[23200], 10.00th=[26870], 20.00th=[27657], 00:32:52.965 | 30.00th=[28443], 40.00th=[28967], 50.00th=[29492], 60.00th=[30540], 00:32:52.965 | 70.00th=[34341], 80.00th=[37487], 90.00th=[40109], 95.00th=[44303], 00:32:52.965 | 99.00th=[52167], 99.50th=[53740], 99.90th=[56361], 99.95th=[56886], 00:32:52.965 | 99.99th=[56886] 00:32:52.965 bw ( KiB/s): min= 1840, max= 2152, per=4.00%, avg=2007.16, stdev=79.24, samples=19 00:32:52.965 iops : min= 460, max= 538, avg=501.79, stdev=19.81, samples=19 00:32:52.965 lat (msec) : 20=2.98%, 50=95.92%, 100=1.09% 00:32:52.965 cpu : usr=98.89%, sys=0.72%, ctx=96, majf=0, minf=11 00:32:52.965 IO depths : 1=0.2%, 2=0.6%, 4=8.0%, 8=76.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=90.4%, 8=5.9%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132880: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=538, BW=2155KiB/s (2207kB/s)(21.1MiB/10012msec) 00:32:52.965 slat (usec): min=5, max=348, avg=17.61, stdev=16.27 00:32:52.965 clat (usec): min=12622, max=76507, avg=29576.27, stdev=5672.37 00:32:52.965 lat (usec): min=12630, max=76535, avg=29593.88, stdev=5672.77 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[15795], 5.00th=[20317], 10.00th=[25560], 20.00th=[27132], 00:32:52.965 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28705], 60.00th=[29230], 00:32:52.965 | 70.00th=[29754], 80.00th=[30802], 90.00th=[36963], 95.00th=[40633], 00:32:52.965 | 99.00th=[47973], 99.50th=[50070], 99.90th=[60556], 99.95th=[61080], 00:32:52.965 | 99.99th=[76022] 00:32:52.965 bw ( KiB/s): min= 1944, max= 2336, per=4.26%, avg=2142.32, stdev=110.81, samples=19 00:32:52.965 iops : min= 486, max= 584, avg=535.58, stdev=27.70, samples=19 00:32:52.965 lat (msec) : 20=4.34%, 50=95.11%, 100=0.56% 00:32:52.965 cpu : usr=95.94%, sys=2.05%, ctx=57, majf=0, minf=10 00:32:52.965 IO depths : 1=1.4%, 2=3.3%, 4=11.1%, 8=71.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=91.0%, 8=4.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132881: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10005msec) 00:32:52.965 slat (nsec): min=5534, max=98828, avg=17388.03, stdev=14114.51 00:32:52.965 clat (usec): min=8647, max=56597, avg=31666.05, stdev=6047.55 00:32:52.965 lat (usec): min=8656, max=56617, avg=31683.44, stdev=6046.23 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[17695], 5.00th=[24511], 
10.00th=[26870], 20.00th=[27657], 00:32:52.965 | 30.00th=[28443], 40.00th=[28967], 50.00th=[29492], 60.00th=[30278], 00:32:52.965 | 70.00th=[34866], 80.00th=[36963], 90.00th=[39060], 95.00th=[42206], 00:32:52.965 | 99.00th=[49021], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:32:52.965 | 99.99th=[56361] 00:32:52.965 bw ( KiB/s): min= 1736, max= 2192, per=3.99%, avg=2005.05, stdev=120.95, samples=19 00:32:52.965 iops : min= 434, max= 548, avg=501.26, stdev=30.24, samples=19 00:32:52.965 lat (msec) : 10=0.24%, 20=1.96%, 50=97.02%, 100=0.77% 00:32:52.965 cpu : usr=95.08%, sys=2.15%, ctx=111, majf=0, minf=9 00:32:52.965 IO depths : 1=0.1%, 2=0.3%, 4=9.1%, 8=76.1%, 16=14.5%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=90.9%, 8=5.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132882: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=530, BW=2122KiB/s (2173kB/s)(20.7MiB/10008msec) 00:32:52.965 slat (usec): min=5, max=115, avg=15.00, stdev=12.89 00:32:52.965 clat (usec): min=8838, max=60278, avg=30054.94, stdev=5751.52 00:32:52.965 lat (usec): min=8847, max=60295, avg=30069.94, stdev=5752.08 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[16188], 5.00th=[20841], 10.00th=[25035], 20.00th=[27132], 00:32:52.965 | 30.00th=[27919], 40.00th=[28443], 50.00th=[28967], 60.00th=[29492], 00:32:52.965 | 70.00th=[30278], 80.00th=[34866], 90.00th=[38011], 95.00th=[40109], 00:32:52.965 | 99.00th=[46400], 99.50th=[49546], 99.90th=[55837], 99.95th=[60031], 00:32:52.965 | 99.99th=[60031] 00:32:52.965 bw ( KiB/s): min= 1760, max= 2288, per=4.19%, avg=2107.79, stdev=144.55, samples=19 00:32:52.965 iops : min= 440, max= 572, avg=526.95, stdev=36.14, samples=19 00:32:52.965 lat (msec) : 10=0.19%, 20=4.03%, 50=95.33%, 100=0.45% 00:32:52.965 cpu : usr=99.06%, sys=0.60%, ctx=19, majf=0, minf=9 00:32:52.965 IO depths : 1=0.7%, 2=2.4%, 4=12.2%, 8=71.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=91.5%, 8=4.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132883: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10018msec) 00:32:52.965 slat (usec): min=5, max=106, avg=14.95, stdev=12.34 00:32:52.965 clat (usec): min=11363, max=59034, avg=31916.07, stdev=6905.23 00:32:52.965 lat (usec): min=11376, max=59065, avg=31931.02, stdev=6904.74 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[16057], 5.00th=[21103], 10.00th=[26346], 20.00th=[27657], 00:32:52.965 | 30.00th=[28443], 40.00th=[28967], 50.00th=[29754], 60.00th=[31327], 00:32:52.965 | 70.00th=[35914], 80.00th=[37487], 90.00th=[40633], 95.00th=[44827], 00:32:52.965 | 99.00th=[49546], 99.50th=[53216], 99.90th=[57934], 99.95th=[58983], 00:32:52.965 | 99.99th=[58983] 00:32:52.965 bw ( KiB/s): min= 1792, max= 2120, per=3.97%, avg=1995.79, stdev=76.70, samples=19 00:32:52.965 iops : min= 448, max= 530, avg=498.95, stdev=19.18, samples=19 00:32:52.965 lat (msec) : 20=3.84%, 50=95.17%, 100=1.00% 00:32:52.965 cpu : usr=98.77%, sys=0.86%, 
ctx=19, majf=0, minf=9 00:32:52.965 IO depths : 1=0.8%, 2=1.7%, 4=9.4%, 8=75.3%, 16=12.8%, 32=0.0%, >=64=0.0% 00:32:52.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.965 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.965 filename2: (groupid=0, jobs=1): err= 0: pid=132884: Sun Jun 9 23:15:18 2024 00:32:52.965 read: IOPS=520, BW=2080KiB/s (2130kB/s)(20.3MiB/10003msec) 00:32:52.965 slat (usec): min=5, max=118, avg=15.72, stdev=12.40 00:32:52.965 clat (usec): min=3763, max=70031, avg=30652.66, stdev=5777.87 00:32:52.965 lat (usec): min=3772, max=70052, avg=30668.37, stdev=5777.24 00:32:52.965 clat percentiles (usec): 00:32:52.965 | 1.00th=[16712], 5.00th=[22152], 10.00th=[26346], 20.00th=[27657], 00:32:52.965 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29230], 60.00th=[29754], 00:32:52.965 | 70.00th=[30540], 80.00th=[35390], 90.00th=[38536], 95.00th=[41157], 00:32:52.965 | 99.00th=[46924], 99.50th=[49021], 99.90th=[56886], 99.95th=[69731], 00:32:52.965 | 99.99th=[69731] 00:32:52.965 bw ( KiB/s): min= 1920, max= 2176, per=4.12%, avg=2069.21, stdev=83.50, samples=19 00:32:52.966 iops : min= 480, max= 544, avg=517.26, stdev=20.93, samples=19 00:32:52.966 lat (msec) : 4=0.04%, 10=0.27%, 20=2.06%, 50=97.14%, 100=0.50% 00:32:52.966 cpu : usr=97.98%, sys=1.15%, ctx=37, majf=0, minf=9 00:32:52.966 IO depths : 1=2.1%, 2=4.2%, 4=13.8%, 8=68.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:32:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 complete : 0=0.0%, 4=91.5%, 8=4.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 issued rwts: total=5202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.966 filename2: (groupid=0, jobs=1): err= 0: pid=132885: Sun Jun 9 23:15:18 2024 00:32:52.966 read: IOPS=549, BW=2196KiB/s (2249kB/s)(21.5MiB/10010msec) 00:32:52.966 slat (nsec): min=5373, max=77201, avg=10140.11, stdev=7401.19 00:32:52.966 clat (usec): min=4794, max=56905, avg=29071.49, stdev=5576.26 00:32:52.966 lat (usec): min=4806, max=56913, avg=29081.63, stdev=5577.12 00:32:52.966 clat percentiles (usec): 00:32:52.966 | 1.00th=[ 9241], 5.00th=[19792], 10.00th=[26084], 20.00th=[27395], 00:32:52.966 | 30.00th=[27919], 40.00th=[28443], 50.00th=[28705], 60.00th=[29230], 00:32:52.966 | 70.00th=[29754], 80.00th=[30278], 90.00th=[35390], 95.00th=[39060], 00:32:52.966 | 99.00th=[48497], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:32:52.966 | 99.99th=[56886] 00:32:52.966 bw ( KiB/s): min= 2016, max= 2432, per=4.38%, avg=2199.58, stdev=95.46, samples=19 00:32:52.966 iops : min= 504, max= 608, avg=549.89, stdev=23.87, samples=19 00:32:52.966 lat (msec) : 10=1.16%, 20=4.06%, 50=94.20%, 100=0.58% 00:32:52.966 cpu : usr=98.88%, sys=0.73%, ctx=19, majf=0, minf=9 00:32:52.966 IO depths : 1=0.9%, 2=1.9%, 4=9.3%, 8=75.5%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 issued rwts: total=5496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.966 filename2: (groupid=0, jobs=1): err= 0: pid=132886: Sun Jun 9 23:15:18 2024 00:32:52.966 read: IOPS=507, BW=2030KiB/s 
(2078kB/s)(19.8MiB/10010msec) 00:32:52.966 slat (usec): min=5, max=614, avg=16.75, stdev=19.26 00:32:52.966 clat (usec): min=11273, max=74116, avg=31439.64, stdev=6606.25 00:32:52.966 lat (usec): min=11284, max=74139, avg=31456.39, stdev=6606.21 00:32:52.966 clat percentiles (usec): 00:32:52.966 | 1.00th=[16319], 5.00th=[21627], 10.00th=[26084], 20.00th=[27395], 00:32:52.966 | 30.00th=[28181], 40.00th=[28705], 50.00th=[29492], 60.00th=[30278], 00:32:52.966 | 70.00th=[34341], 80.00th=[36963], 90.00th=[39584], 95.00th=[43254], 00:32:52.966 | 99.00th=[49546], 99.50th=[57934], 99.90th=[63177], 99.95th=[73925], 00:32:52.966 | 99.99th=[73925] 00:32:52.966 bw ( KiB/s): min= 1760, max= 2160, per=4.03%, avg=2022.89, stdev=95.28, samples=19 00:32:52.966 iops : min= 440, max= 540, avg=505.68, stdev=23.86, samples=19 00:32:52.966 lat (msec) : 20=3.48%, 50=95.55%, 100=0.96% 00:32:52.966 cpu : usr=96.37%, sys=1.81%, ctx=68, majf=0, minf=9 00:32:52.966 IO depths : 1=0.4%, 2=0.8%, 4=8.8%, 8=75.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:32:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 complete : 0=0.0%, 4=90.5%, 8=5.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.966 issued rwts: total=5079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:52.966 00:32:52.966 Run status group 0 (all jobs): 00:32:52.966 READ: bw=49.1MiB/s (51.4MB/s), 1999KiB/s-2300KiB/s (2047kB/s-2356kB/s), io=492MiB (516MB), run=10002-10026msec 00:32:52.966 23:15:19 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:52.966 23:15:19 -- target/dif.sh@43 -- # local sub 00:32:52.966 23:15:19 -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.966 23:15:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:52.966 23:15:19 -- target/dif.sh@36 -- # local sub_id=0 00:32:52.966 23:15:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.966 23:15:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:52.966 23:15:19 -- target/dif.sh@36 -- # local sub_id=1 00:32:52.966 23:15:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@45 -- # for sub in "$@" 00:32:52.966 23:15:19 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:52.966 23:15:19 -- target/dif.sh@36 -- # local sub_id=2 00:32:52.966 23:15:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # numjobs=2 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # iodepth=8 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # runtime=5 00:32:52.966 23:15:19 -- target/dif.sh@115 -- # files=1 00:32:52.966 23:15:19 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:52.966 23:15:19 -- target/dif.sh@28 -- # local sub 00:32:52.966 23:15:19 -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.966 23:15:19 -- target/dif.sh@31 -- # create_subsystem 0 00:32:52.966 23:15:19 -- target/dif.sh@18 -- # local sub_id=0 00:32:52.966 23:15:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 bdev_null0 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 [2024-06-09 23:15:19.250836] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@30 -- # for sub in "$@" 00:32:52.966 23:15:19 -- target/dif.sh@31 -- # create_subsystem 1 00:32:52.966 23:15:19 -- target/dif.sh@18 -- # local sub_id=1 00:32:52.966 23:15:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 bdev_null1 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- 
common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.966 23:15:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:52.966 23:15:19 -- common/autotest_common.sh@10 -- # set +x 00:32:52.966 23:15:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:52.966 23:15:19 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:52.966 23:15:19 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:52.966 23:15:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:52.966 23:15:19 -- nvmf/common.sh@520 -- # config=() 00:32:52.966 23:15:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.966 23:15:19 -- nvmf/common.sh@520 -- # local subsystem config 00:32:52.966 23:15:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:52.966 23:15:19 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.966 23:15:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.966 { 00:32:52.966 "params": { 00:32:52.966 "name": "Nvme$subsystem", 00:32:52.966 "trtype": "$TEST_TRANSPORT", 00:32:52.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.966 "adrfam": "ipv4", 00:32:52.966 "trsvcid": "$NVMF_PORT", 00:32:52.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.966 "hdgst": ${hdgst:-false}, 00:32:52.966 "ddgst": ${ddgst:-false} 00:32:52.966 }, 00:32:52.966 "method": "bdev_nvme_attach_controller" 00:32:52.966 } 00:32:52.966 EOF 00:32:52.966 )") 00:32:52.966 23:15:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:52.966 23:15:19 -- target/dif.sh@82 -- # gen_fio_conf 00:32:52.967 23:15:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:52.967 23:15:19 -- target/dif.sh@54 -- # local file 00:32:52.967 23:15:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:52.967 23:15:19 -- target/dif.sh@56 -- # cat 00:32:52.967 23:15:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.967 23:15:19 -- common/autotest_common.sh@1320 -- # shift 00:32:52.967 23:15:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:52.967 23:15:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.967 23:15:19 -- nvmf/common.sh@542 -- # cat 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.967 23:15:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:52.967 23:15:19 -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:52.967 23:15:19 -- target/dif.sh@73 -- # cat 00:32:52.967 23:15:19 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:32:52.967 23:15:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:52.967 { 00:32:52.967 "params": { 00:32:52.967 "name": "Nvme$subsystem", 00:32:52.967 "trtype": "$TEST_TRANSPORT", 00:32:52.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:52.967 "adrfam": "ipv4", 00:32:52.967 "trsvcid": "$NVMF_PORT", 00:32:52.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:52.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:52.967 "hdgst": ${hdgst:-false}, 00:32:52.967 "ddgst": ${ddgst:-false} 00:32:52.967 }, 00:32:52.967 "method": "bdev_nvme_attach_controller" 00:32:52.967 } 00:32:52.967 EOF 00:32:52.967 )") 00:32:52.967 23:15:19 -- target/dif.sh@72 -- # (( file++ )) 00:32:52.967 23:15:19 -- target/dif.sh@72 -- # (( file <= files )) 00:32:52.967 23:15:19 -- nvmf/common.sh@542 -- # cat 00:32:52.967 23:15:19 -- nvmf/common.sh@544 -- # jq . 00:32:52.967 23:15:19 -- nvmf/common.sh@545 -- # IFS=, 00:32:52.967 23:15:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:52.967 "params": { 00:32:52.967 "name": "Nvme0", 00:32:52.967 "trtype": "tcp", 00:32:52.967 "traddr": "10.0.0.2", 00:32:52.967 "adrfam": "ipv4", 00:32:52.967 "trsvcid": "4420", 00:32:52.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:52.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:52.967 "hdgst": false, 00:32:52.967 "ddgst": false 00:32:52.967 }, 00:32:52.967 "method": "bdev_nvme_attach_controller" 00:32:52.967 },{ 00:32:52.967 "params": { 00:32:52.967 "name": "Nvme1", 00:32:52.967 "trtype": "tcp", 00:32:52.967 "traddr": "10.0.0.2", 00:32:52.967 "adrfam": "ipv4", 00:32:52.967 "trsvcid": "4420", 00:32:52.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:52.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:52.967 "hdgst": false, 00:32:52.967 "ddgst": false 00:32:52.967 }, 00:32:52.967 "method": "bdev_nvme_attach_controller" 00:32:52.967 }' 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:52.967 23:15:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:52.967 23:15:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:52.967 23:15:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:52.967 23:15:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:52.967 23:15:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:52.967 23:15:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:52.967 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:52.967 ... 00:32:52.967 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:52.967 ... 00:32:52.967 fio-3.35 00:32:52.967 Starting 4 threads 00:32:52.967 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.967 [2024-06-09 23:15:20.229746] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:52.967 [2024-06-09 23:15:20.229795] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:58.257 00:32:58.257 filename0: (groupid=0, jobs=1): err= 0: pid=135684: Sun Jun 9 23:15:25 2024 00:32:58.257 read: IOPS=1778, BW=13.9MiB/s (14.6MB/s)(69.5MiB/5002msec) 00:32:58.257 slat (nsec): min=5335, max=31323, avg=5847.43, stdev=1199.61 00:32:58.257 clat (usec): min=2405, max=8522, avg=4483.00, stdev=718.41 00:32:58.257 lat (usec): min=2410, max=8553, avg=4488.85, stdev=718.48 00:32:58.257 clat percentiles (usec): 00:32:58.257 | 1.00th=[ 3097], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3884], 00:32:58.257 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4555], 00:32:58.257 | 70.00th=[ 4752], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 5800], 00:32:58.257 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7046], 99.95th=[ 8094], 00:32:58.257 | 99.99th=[ 8586] 00:32:58.257 bw ( KiB/s): min=13872, max=14608, per=23.82%, avg=14268.44, stdev=252.83, samples=9 00:32:58.257 iops : min= 1734, max= 1826, avg=1783.56, stdev=31.60, samples=9 00:32:58.257 lat (msec) : 4=24.07%, 10=75.93% 00:32:58.257 cpu : usr=97.50%, sys=2.24%, ctx=7, majf=0, minf=11 00:32:58.257 IO depths : 1=0.1%, 2=1.2%, 4=68.2%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 issued rwts: total=8895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:58.257 filename0: (groupid=0, jobs=1): err= 0: pid=135685: Sun Jun 9 23:15:25 2024 00:32:58.257 read: IOPS=1735, BW=13.6MiB/s (14.2MB/s)(67.8MiB/5002msec) 00:32:58.257 slat (nsec): min=5333, max=28052, avg=5856.74, stdev=1190.30 00:32:58.257 clat (usec): min=2709, max=8644, avg=4594.74, stdev=734.91 00:32:58.257 lat (usec): min=2714, max=8650, avg=4600.60, stdev=734.93 00:32:58.257 clat percentiles (usec): 00:32:58.257 | 1.00th=[ 3130], 5.00th=[ 3490], 10.00th=[ 3720], 20.00th=[ 4015], 00:32:58.257 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4686], 00:32:58.257 | 70.00th=[ 4883], 80.00th=[ 5211], 90.00th=[ 5604], 95.00th=[ 5932], 00:32:58.257 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 8094], 99.95th=[ 8455], 00:32:58.257 | 99.99th=[ 8586] 00:32:58.257 bw ( KiB/s): min=13488, max=14288, per=23.15%, avg=13872.00, stdev=250.95, samples=9 00:32:58.257 iops : min= 1686, max= 1786, avg=1734.00, stdev=31.37, samples=9 00:32:58.257 lat (msec) : 4=19.46%, 10=80.54% 00:32:58.257 cpu : usr=97.32%, sys=2.44%, ctx=6, majf=0, minf=9 00:32:58.257 IO depths : 1=0.2%, 2=1.2%, 4=67.7%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 issued rwts: total=8680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:58.257 filename1: (groupid=0, jobs=1): err= 0: pid=135686: Sun Jun 9 23:15:25 2024 00:32:58.257 read: IOPS=2257, BW=17.6MiB/s (18.5MB/s)(88.9MiB/5042msec) 00:32:58.257 slat (nsec): min=5340, max=47746, avg=7055.00, stdev=1738.90 00:32:58.257 clat (usec): min=1464, max=44279, avg=3512.38, stdev=1008.55 00:32:58.257 lat (usec): min=1469, max=44285, avg=3519.43, stdev=1008.54 00:32:58.257 clat percentiles (usec): 00:32:58.257 | 1.00th=[ 2343], 5.00th=[ 2638], 10.00th=[ 
2802], 20.00th=[ 3032], 00:32:58.257 | 30.00th=[ 3195], 40.00th=[ 3326], 50.00th=[ 3458], 60.00th=[ 3589], 00:32:58.257 | 70.00th=[ 3720], 80.00th=[ 3949], 90.00th=[ 4228], 95.00th=[ 4490], 00:32:58.257 | 99.00th=[ 5080], 99.50th=[ 5342], 99.90th=[ 5866], 99.95th=[ 5997], 00:32:58.257 | 99.99th=[44303] 00:32:58.257 bw ( KiB/s): min=17760, max=18528, per=30.40%, avg=18210.00, stdev=251.88, samples=10 00:32:58.257 iops : min= 2220, max= 2316, avg=2276.20, stdev=31.49, samples=10 00:32:58.257 lat (msec) : 2=0.21%, 4=80.68%, 10=19.06%, 50=0.04% 00:32:58.257 cpu : usr=97.04%, sys=2.68%, ctx=10, majf=0, minf=0 00:32:58.257 IO depths : 1=0.2%, 2=4.5%, 4=66.0%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 issued rwts: total=11384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:58.257 filename1: (groupid=0, jobs=1): err= 0: pid=135687: Sun Jun 9 23:15:25 2024 00:32:58.257 read: IOPS=1759, BW=13.7MiB/s (14.4MB/s)(68.8MiB/5001msec) 00:32:58.257 slat (nsec): min=5334, max=30266, avg=7071.85, stdev=1808.27 00:32:58.257 clat (usec): min=2436, max=8500, avg=4528.05, stdev=722.98 00:32:58.257 lat (usec): min=2442, max=8527, avg=4535.12, stdev=723.00 00:32:58.257 clat percentiles (usec): 00:32:58.257 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 3949], 00:32:58.257 | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4424], 60.00th=[ 4621], 00:32:58.257 | 70.00th=[ 4817], 80.00th=[ 5080], 90.00th=[ 5538], 95.00th=[ 5866], 00:32:58.257 | 99.00th=[ 6521], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7308], 00:32:58.257 | 99.99th=[ 8455] 00:32:58.257 bw ( KiB/s): min=13744, max=14288, per=23.47%, avg=14060.44, stdev=193.46, samples=9 00:32:58.257 iops : min= 1718, max= 1786, avg=1757.56, stdev=24.18, samples=9 00:32:58.257 lat (msec) : 4=21.91%, 10=78.09% 00:32:58.257 cpu : usr=97.18%, sys=2.54%, ctx=6, majf=0, minf=9 00:32:58.257 IO depths : 1=0.1%, 2=1.1%, 4=67.5%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:58.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:58.257 issued rwts: total=8800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:58.257 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:58.257 00:32:58.257 Run status group 0 (all jobs): 00:32:58.257 READ: bw=58.5MiB/s (61.3MB/s), 13.6MiB/s-17.6MiB/s (14.2MB/s-18.5MB/s), io=295MiB (309MB), run=5001-5042msec 00:32:58.257 23:15:25 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:58.257 23:15:25 -- target/dif.sh@43 -- # local sub 00:32:58.257 23:15:25 -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.257 23:15:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:58.257 23:15:25 -- target/dif.sh@36 -- # local sub_id=0 00:32:58.257 23:15:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:58.257 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.257 23:15:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:58.257 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 23:15:25 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.257 23:15:25 -- target/dif.sh@45 -- # for sub in "$@" 00:32:58.257 23:15:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:58.257 23:15:25 -- target/dif.sh@36 -- # local sub_id=1 00:32:58.257 23:15:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.257 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.257 23:15:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:58.257 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.257 00:32:58.257 real 0m24.277s 00:32:58.257 user 5m18.204s 00:32:58.257 sys 0m4.045s 00:32:58.257 23:15:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 ************************************ 00:32:58.257 END TEST fio_dif_rand_params 00:32:58.257 ************************************ 00:32:58.257 23:15:25 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:58.257 23:15:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:58.257 23:15:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:58.257 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.257 ************************************ 00:32:58.257 START TEST fio_dif_digest 00:32:58.257 ************************************ 00:32:58.257 23:15:25 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:32:58.257 23:15:25 -- target/dif.sh@123 -- # local NULL_DIF 00:32:58.258 23:15:25 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:58.258 23:15:25 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:58.258 23:15:25 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:58.258 23:15:25 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:58.258 23:15:25 -- target/dif.sh@127 -- # numjobs=3 00:32:58.258 23:15:25 -- target/dif.sh@127 -- # iodepth=3 00:32:58.258 23:15:25 -- target/dif.sh@127 -- # runtime=10 00:32:58.258 23:15:25 -- target/dif.sh@128 -- # hdgst=true 00:32:58.258 23:15:25 -- target/dif.sh@128 -- # ddgst=true 00:32:58.258 23:15:25 -- target/dif.sh@130 -- # create_subsystems 0 00:32:58.258 23:15:25 -- target/dif.sh@28 -- # local sub 00:32:58.258 23:15:25 -- target/dif.sh@30 -- # for sub in "$@" 00:32:58.258 23:15:25 -- target/dif.sh@31 -- # create_subsystem 0 00:32:58.258 23:15:25 -- target/dif.sh@18 -- # local sub_id=0 00:32:58.258 23:15:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:58.258 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.258 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.258 bdev_null0 00:32:58.258 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.258 23:15:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:58.258 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.258 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.258 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.258 23:15:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:58.258 
23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.258 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.258 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.258 23:15:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:58.258 23:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:58.258 23:15:25 -- common/autotest_common.sh@10 -- # set +x 00:32:58.258 [2024-06-09 23:15:25.691354] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.258 23:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:58.258 23:15:25 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:58.258 23:15:25 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:58.258 23:15:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:58.258 23:15:25 -- nvmf/common.sh@520 -- # config=() 00:32:58.258 23:15:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.258 23:15:25 -- nvmf/common.sh@520 -- # local subsystem config 00:32:58.258 23:15:25 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.258 23:15:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:58.258 23:15:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:58.258 { 00:32:58.258 "params": { 00:32:58.258 "name": "Nvme$subsystem", 00:32:58.258 "trtype": "$TEST_TRANSPORT", 00:32:58.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.258 "adrfam": "ipv4", 00:32:58.258 "trsvcid": "$NVMF_PORT", 00:32:58.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.258 "hdgst": ${hdgst:-false}, 00:32:58.258 "ddgst": ${ddgst:-false} 00:32:58.258 }, 00:32:58.258 "method": "bdev_nvme_attach_controller" 00:32:58.258 } 00:32:58.258 EOF 00:32:58.258 )") 00:32:58.258 23:15:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:58.258 23:15:25 -- target/dif.sh@82 -- # gen_fio_conf 00:32:58.258 23:15:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.258 23:15:25 -- target/dif.sh@54 -- # local file 00:32:58.258 23:15:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:58.258 23:15:25 -- target/dif.sh@56 -- # cat 00:32:58.258 23:15:25 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.258 23:15:25 -- common/autotest_common.sh@1320 -- # shift 00:32:58.258 23:15:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:58.258 23:15:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.258 23:15:25 -- nvmf/common.sh@542 -- # cat 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.258 23:15:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:58.258 23:15:25 -- target/dif.sh@72 -- # (( file <= files )) 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:58.258 23:15:25 -- nvmf/common.sh@544 -- # jq . 
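Aside for readers following the fio_bdev wrapper here: the ldd/grep/awk lines above are the wrapper probing whether the spdk_bdev fio plugin links a sanitizer runtime that has to be preloaded ahead of fio itself. Condensed into a standalone sketch (the plugin path is taken from the trace; the loop shape is a paraphrase of the traced wrapper, not its verbatim source):

# Paraphrased sanitizer probe from the traced fio_plugin logic.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # resolved path of the sanitizer runtime; empty when the plugin does not link it
    lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $lib ]] && { asan_lib=$lib; break; }
done
# In this run both probes come back empty, so LD_PRELOAD carries only the
# plugin itself when fio is finally exec'd:
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61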
00:32:58.258 23:15:25 -- nvmf/common.sh@545 -- # IFS=, 00:32:58.258 23:15:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:58.258 "params": { 00:32:58.258 "name": "Nvme0", 00:32:58.258 "trtype": "tcp", 00:32:58.258 "traddr": "10.0.0.2", 00:32:58.258 "adrfam": "ipv4", 00:32:58.258 "trsvcid": "4420", 00:32:58.258 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:58.258 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:58.258 "hdgst": true, 00:32:58.258 "ddgst": true 00:32:58.258 }, 00:32:58.258 "method": "bdev_nvme_attach_controller" 00:32:58.258 }' 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:58.258 23:15:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:58.258 23:15:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:58.258 23:15:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:58.258 23:15:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:58.258 23:15:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:58.258 23:15:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:58.258 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:58.258 ... 00:32:58.258 fio-3.35 00:32:58.258 Starting 3 threads 00:32:58.258 EAL: No free 2048 kB hugepages reported on node 1 00:32:58.258 [2024-06-09 23:15:26.427125] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
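One detail worth calling out before the digest results: relative to the earlier runs, the attach parameters generated for Nvme0 are identical except that hdgst and ddgst flip from false to true, which is what makes this job exercise NVMe/TCP header and data digests end to end. Reassembled from the printf above with indentation added (the harness itself emits this fragment through a cat heredoc; the enclosing document that jq builds around it is not shown in the trace and is left out here):

cat <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  },
  "method": "bdev_nvme_attach_controller"
}
EOF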
00:32:58.258 [2024-06-09 23:15:26.427177] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:10.555 00:33:10.555 filename0: (groupid=0, jobs=1): err= 0: pid=136974: Sun Jun 9 23:15:36 2024 00:33:10.555 read: IOPS=126, BW=15.8MiB/s (16.5MB/s)(158MiB/10037msec) 00:33:10.555 slat (nsec): min=8136, max=72501, avg=9166.33, stdev=2894.12 00:33:10.555 clat (msec): min=8, max=100, avg=23.79, stdev=18.91 00:33:10.555 lat (msec): min=8, max=100, avg=23.80, stdev=18.91 00:33:10.555 clat percentiles (msec): 00:33:10.555 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:33:10.555 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:33:10.555 | 70.00th=[ 17], 80.00th=[ 53], 90.00th=[ 56], 95.00th=[ 58], 00:33:10.556 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 100], 99.95th=[ 102], 00:33:10.556 | 99.99th=[ 102] 00:33:10.556 bw ( KiB/s): min=12032, max=20992, per=33.44%, avg=16153.60, stdev=2926.99, samples=20 00:33:10.556 iops : min= 94, max= 164, avg=126.20, stdev=22.87, samples=20 00:33:10.556 lat (msec) : 10=4.27%, 20=72.33%, 50=0.24%, 100=23.08%, 250=0.08% 00:33:10.556 cpu : usr=96.52%, sys=3.10%, ctx=122, majf=0, minf=122 00:33:10.556 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.556 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:10.556 filename0: (groupid=0, jobs=1): err= 0: pid=136975: Sun Jun 9 23:15:36 2024 00:33:10.556 read: IOPS=136, BW=17.0MiB/s (17.9MB/s)(171MiB/10007msec) 00:33:10.556 slat (nsec): min=5971, max=30372, avg=8781.43, stdev=1266.55 00:33:10.556 clat (usec): min=7545, max=98502, avg=21994.62, stdev=17578.96 00:33:10.556 lat (usec): min=7554, max=98511, avg=22003.40, stdev=17578.95 00:33:10.556 clat percentiles (usec): 00:33:10.556 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11731], 00:33:10.556 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13960], 60.00th=[14746], 00:33:10.556 | 70.00th=[15926], 80.00th=[51119], 90.00th=[55837], 95.00th=[56886], 00:33:10.556 | 99.00th=[60031], 99.50th=[60556], 99.90th=[95945], 99.95th=[98042], 00:33:10.556 | 99.99th=[98042] 00:33:10.556 bw ( KiB/s): min=13312, max=24064, per=36.06%, avg=17420.80, stdev=2747.77, samples=20 00:33:10.556 iops : min= 104, max= 188, avg=136.10, stdev=21.47, samples=20 00:33:10.556 lat (msec) : 10=6.89%, 20=72.29%, 50=0.59%, 100=20.23% 00:33:10.556 cpu : usr=96.51%, sys=3.02%, ctx=357, majf=0, minf=145 00:33:10.556 IO depths : 1=7.1%, 2=92.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 issued rwts: total=1364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.556 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:10.556 filename0: (groupid=0, jobs=1): err= 0: pid=136976: Sun Jun 9 23:15:36 2024 00:33:10.556 read: IOPS=115, BW=14.4MiB/s (15.1MB/s)(145MiB/10039msec) 00:33:10.556 slat (nsec): min=5679, max=72338, avg=8426.10, stdev=2950.07 00:33:10.556 clat (msec): min=7, max=140, avg=25.95, stdev=21.00 00:33:10.556 lat (msec): min=7, max=140, avg=25.96, stdev=21.00 00:33:10.556 clat percentiles (msec): 00:33:10.556 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 
20.00th=[ 13], 00:33:10.556 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:33:10.556 | 70.00th=[ 18], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 58], 00:33:10.556 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 140], 99.95th=[ 140], 00:33:10.556 | 99.99th=[ 140] 00:33:10.556 bw ( KiB/s): min= 8704, max=19200, per=30.65%, avg=14809.60, stdev=3017.94, samples=20 00:33:10.556 iops : min= 68, max= 150, avg=115.70, stdev=23.58, samples=20 00:33:10.556 lat (msec) : 10=5.26%, 20=66.47%, 50=0.26%, 100=27.84%, 250=0.17% 00:33:10.556 cpu : usr=97.03%, sys=2.67%, ctx=13, majf=0, minf=123 00:33:10.556 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:10.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:10.556 issued rwts: total=1160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:10.556 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:10.556 00:33:10.556 Run status group 0 (all jobs): 00:33:10.556 READ: bw=47.2MiB/s (49.5MB/s), 14.4MiB/s-17.0MiB/s (15.1MB/s-17.9MB/s), io=474MiB (497MB), run=10007-10039msec 00:33:10.556 23:15:36 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:10.556 23:15:36 -- target/dif.sh@43 -- # local sub 00:33:10.556 23:15:36 -- target/dif.sh@45 -- # for sub in "$@" 00:33:10.556 23:15:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:10.556 23:15:36 -- target/dif.sh@36 -- # local sub_id=0 00:33:10.556 23:15:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:10.556 23:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.556 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:33:10.556 23:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.556 23:15:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:10.556 23:15:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:10.556 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:33:10.556 23:15:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:10.556 00:33:10.556 real 0m11.101s 00:33:10.556 user 0m41.198s 00:33:10.556 sys 0m1.193s 00:33:10.556 23:15:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:10.556 23:15:36 -- common/autotest_common.sh@10 -- # set +x 00:33:10.556 ************************************ 00:33:10.556 END TEST fio_dif_digest 00:33:10.556 ************************************ 00:33:10.556 23:15:36 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:10.556 23:15:36 -- target/dif.sh@147 -- # nvmftestfini 00:33:10.556 23:15:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:10.556 23:15:36 -- nvmf/common.sh@116 -- # sync 00:33:10.556 23:15:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:10.556 23:15:36 -- nvmf/common.sh@119 -- # set +e 00:33:10.556 23:15:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:10.556 23:15:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:10.556 rmmod nvme_tcp 00:33:10.556 rmmod nvme_fabrics 00:33:10.556 rmmod nvme_keyring 00:33:10.556 23:15:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:10.556 23:15:36 -- nvmf/common.sh@123 -- # set -e 00:33:10.556 23:15:36 -- nvmf/common.sh@124 -- # return 0 00:33:10.556 23:15:36 -- nvmf/common.sh@477 -- # '[' -n 126078 ']' 00:33:10.556 23:15:36 -- nvmf/common.sh@478 -- # killprocess 126078 00:33:10.556 23:15:36 -- common/autotest_common.sh@926 -- # '[' -z 126078 ']' 00:33:10.556 23:15:36 -- common/autotest_common.sh@930 -- # 
kill -0 126078 00:33:10.556 23:15:36 -- common/autotest_common.sh@931 -- # uname 00:33:10.556 23:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:10.556 23:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126078 00:33:10.556 23:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:10.556 23:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:10.556 23:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126078' 00:33:10.556 killing process with pid 126078 00:33:10.556 23:15:36 -- common/autotest_common.sh@945 -- # kill 126078 00:33:10.556 23:15:36 -- common/autotest_common.sh@950 -- # wait 126078 00:33:10.556 23:15:37 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:10.556 23:15:37 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:11.944 Waiting for block devices as requested 00:33:11.944 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:11.944 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:11.944 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:11.944 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:12.205 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:12.205 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:12.205 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:12.465 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:12.465 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:12.727 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:12.727 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:12.727 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:12.727 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:12.989 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:12.989 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:12.989 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:13.251 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:13.513 23:15:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:13.513 23:15:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:13.513 23:15:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:13.513 23:15:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:13.513 23:15:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.513 23:15:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:13.513 23:15:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.432 23:15:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:15.432 00:33:15.432 real 1m16.272s 00:33:15.432 user 8m0.997s 00:33:15.432 sys 0m18.651s 00:33:15.432 23:15:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:15.432 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:33:15.432 ************************************ 00:33:15.432 END TEST nvmf_dif 00:33:15.432 ************************************ 00:33:15.432 23:15:43 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:15.432 23:15:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:15.432 23:15:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:15.432 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:33:15.432 ************************************ 00:33:15.432 START TEST nvmf_abort_qd_sizes 00:33:15.432 ************************************ 00:33:15.432 23:15:43 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:15.694 * Looking for test storage... 00:33:15.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:15.694 23:15:43 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:15.694 23:15:43 -- nvmf/common.sh@7 -- # uname -s 00:33:15.694 23:15:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.694 23:15:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.694 23:15:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.694 23:15:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.694 23:15:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.694 23:15:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.694 23:15:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.694 23:15:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.694 23:15:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.694 23:15:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.694 23:15:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.694 23:15:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:15.694 23:15:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.694 23:15:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.694 23:15:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.694 23:15:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:15.694 23:15:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.694 23:15:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.694 23:15:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.694 23:15:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.694 23:15:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.694 23:15:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.694 23:15:43 -- paths/export.sh@5 -- # export PATH 00:33:15.694 23:15:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.694 23:15:43 -- nvmf/common.sh@46 -- # : 0 00:33:15.694 23:15:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:15.694 23:15:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:15.694 23:15:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:15.694 23:15:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.694 23:15:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.694 23:15:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:15.694 23:15:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:15.694 23:15:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:15.694 23:15:43 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:15.694 23:15:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:15.694 23:15:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.694 23:15:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:15.694 23:15:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:15.694 23:15:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:15.694 23:15:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.694 23:15:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:15.694 23:15:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.694 23:15:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:15.694 23:15:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:15.694 23:15:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:15.694 23:15:43 -- common/autotest_common.sh@10 -- # set +x 00:33:22.293 23:15:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:22.293 23:15:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:22.293 23:15:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:22.293 23:15:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:22.293 23:15:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:22.293 23:15:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:22.293 23:15:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:22.293 23:15:50 -- nvmf/common.sh@294 -- # net_devs=() 00:33:22.293 23:15:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:22.293 23:15:50 -- nvmf/common.sh@295 -- # e810=() 00:33:22.293 23:15:50 -- nvmf/common.sh@295 -- # local -ga e810 00:33:22.293 23:15:50 -- nvmf/common.sh@296 -- # x722=() 00:33:22.293 23:15:50 -- nvmf/common.sh@296 -- # local -ga x722 00:33:22.293 23:15:50 -- nvmf/common.sh@297 -- # mlx=() 00:33:22.293 23:15:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:22.293 23:15:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.293 23:15:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:22.293 23:15:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:22.293 23:15:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:22.293 23:15:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:22.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:22.293 23:15:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:22.293 23:15:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:22.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:22.293 23:15:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:22.293 23:15:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.293 23:15:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.293 23:15:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:22.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:22.293 23:15:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.293 23:15:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:22.293 23:15:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.293 23:15:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.293 23:15:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:22.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:22.293 23:15:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.293 23:15:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:22.293 23:15:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:22.293 23:15:50 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:22.293 23:15:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:22.293 23:15:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:22.293 23:15:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.293 23:15:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:22.293 23:15:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:22.293 23:15:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:22.293 23:15:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:22.293 23:15:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:22.293 23:15:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:22.293 23:15:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.293 23:15:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:22.293 23:15:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:22.293 23:15:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:22.293 23:15:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:22.554 23:15:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:22.554 23:15:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:22.554 23:15:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:22.555 23:15:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:22.555 23:15:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:22.555 23:15:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:22.555 23:15:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:22.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:33:22.555 00:33:22.555 --- 10.0.0.2 ping statistics --- 00:33:22.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.555 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:33:22.555 23:15:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:22.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:22.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:33:22.555 00:33:22.555 --- 10.0.0.1 ping statistics --- 00:33:22.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.555 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:33:22.555 23:15:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.555 23:15:50 -- nvmf/common.sh@410 -- # return 0 00:33:22.555 23:15:50 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:22.555 23:15:50 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:25.857 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:25.857 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:25.857 23:15:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.857 23:15:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:25.857 23:15:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:25.857 23:15:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.857 23:15:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:25.857 23:15:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:25.857 23:15:54 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:25.857 23:15:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:25.857 23:15:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:25.857 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:33:26.118 23:15:54 -- nvmf/common.sh@469 -- # nvmfpid=146395 00:33:26.118 23:15:54 -- nvmf/common.sh@470 -- # waitforlisten 146395 00:33:26.118 23:15:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:26.118 23:15:54 -- common/autotest_common.sh@819 -- # '[' -z 146395 ']' 00:33:26.118 23:15:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.118 23:15:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:26.118 23:15:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.118 23:15:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:26.118 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:33:26.118 [2024-06-09 23:15:54.085606] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
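Stripped of the xtrace prefixes, the nvmftestinit plumbing above reduces to ordinary iproute2/iptables calls. A condensed, hand-written sketch of the same steps, reusing the interface names and addresses recorded in this run (cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this machine, not defaults):

  # isolate one port of the E810 pair as the target inside its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps cvl_0_1 (10.0.0.1); the target port gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring up both links plus the namespace loopback
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port and check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1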
00:33:26.118 [2024-06-09 23:15:54.085653] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.118 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.118 [2024-06-09 23:15:54.151480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.118 [2024-06-09 23:15:54.215754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:26.118 [2024-06-09 23:15:54.215899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.118 [2024-06-09 23:15:54.215910] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.118 [2024-06-09 23:15:54.215918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.118 [2024-06-09 23:15:54.216029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.119 [2024-06-09 23:15:54.216146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.119 [2024-06-09 23:15:54.216305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.119 [2024-06-09 23:15:54.216306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:26.691 23:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:26.691 23:15:54 -- common/autotest_common.sh@852 -- # return 0 00:33:26.691 23:15:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:26.691 23:15:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:26.691 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:33:26.953 23:15:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:26.953 23:15:54 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:26.953 23:15:54 -- scripts/common.sh@312 -- # local nvmes 00:33:26.953 23:15:54 -- scripts/common.sh@314 -- # [[ -n 0000:65:00.0 ]] 00:33:26.953 23:15:54 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:26.953 23:15:54 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:26.953 23:15:54 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:33:26.953 23:15:54 -- scripts/common.sh@322 -- # uname -s 00:33:26.953 23:15:54 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:26.953 23:15:54 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:26.953 23:15:54 -- scripts/common.sh@327 -- # (( 1 )) 00:33:26.953 23:15:54 -- scripts/common.sh@328 -- # printf '%s\n' 0000:65:00.0 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:65:00.0 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:26.953 23:15:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:26.953 23:15:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:26.953 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:33:26.953 ************************************ 00:33:26.953 START TEST 
spdk_target_abort 00:33:26.953 ************************************ 00:33:26.953 23:15:54 -- common/autotest_common.sh@1104 -- # spdk_target 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:26.953 23:15:54 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:33:26.953 23:15:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:26.953 23:15:54 -- common/autotest_common.sh@10 -- # set +x 00:33:27.215 spdk_targetn1 00:33:27.215 23:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:27.215 23:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:27.215 23:15:55 -- common/autotest_common.sh@10 -- # set +x 00:33:27.215 [2024-06-09 23:15:55.221390] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.215 23:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:27.215 23:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:27.215 23:15:55 -- common/autotest_common.sh@10 -- # set +x 00:33:27.215 23:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:27.215 23:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:27.215 23:15:55 -- common/autotest_common.sh@10 -- # set +x 00:33:27.215 23:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:27.215 23:15:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:27.215 23:15:55 -- common/autotest_common.sh@10 -- # set +x 00:33:27.215 [2024-06-09 23:15:55.261655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.215 23:15:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:27.215 23:15:55 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:27.215 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.477 [2024-06-09 23:15:55.421208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:904 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.421232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.426916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:992 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.426931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007d p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.442889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1352 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.442905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00aa p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.448908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1424 len:8 PRP1 0x2000078be000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.448921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b3 p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.456870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1552 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.456885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c4 p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.474843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2008 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.474860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00fc p:1 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.481368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2128 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:33:27.477 [2024-06-09 23:15:55.481382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:27.477 [2024-06-09 23:15:55.519890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2912 len:8 PRP1 0x2000078c6000 PRP2 0x0 
00:33:27.478 [2024-06-09 23:15:55.519907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:27.478 [2024-06-09 23:15:55.535903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3352 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:33:27.478 [2024-06-09 23:15:55.535919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a4 p:0 m:0 dnr:0 00:33:27.478 [2024-06-09 23:15:55.550913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3696 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:27.478 [2024-06-09 23:15:55.550928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00d1 p:0 m:0 dnr:0 00:33:30.828 Initializing NVMe Controllers 00:33:30.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:30.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:30.828 Initialization complete. Launching workers. 00:33:30.828 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8362, failed: 10 00:33:30.828 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 3029, failed to submit 5343 00:33:30.828 success 816, unsuccess 2213, failed 0 00:33:30.828 23:15:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:30.828 23:15:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:30.828 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.828 [2024-06-09 23:15:58.636579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:624 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:33:30.828 [2024-06-09 23:15:58.636622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:33:30.828 [2024-06-09 23:15:58.676540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:1576 len:8 PRP1 0x200007c42000 PRP2 0x0 00:33:30.829 [2024-06-09 23:15:58.676565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00d2 p:1 m:0 dnr:0 00:33:30.829 [2024-06-09 23:15:58.756462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:3440 len:8 PRP1 0x200007c40000 PRP2 0x0 00:33:30.829 [2024-06-09 23:15:58.756488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00b6 p:0 m:0 dnr:0 00:33:32.744 [2024-06-09 23:16:00.579634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:44784 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:33:32.744 [2024-06-09 23:16:00.579669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00e1 p:0 m:0 dnr:0 00:33:32.744 [2024-06-09 23:16:00.791563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:49608 len:8 PRP1 0x200007c3e000 PRP2 0x0 00:33:32.744 [2024-06-09 23:16:00.791591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:003a p:1 m:0 
dnr:0 00:33:33.688 Initializing NVMe Controllers 00:33:33.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:33.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:33.688 Initialization complete. Launching workers. 00:33:33.688 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8615, failed: 5 00:33:33.688 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1194, failed to submit 7426 00:33:33.688 success 364, unsuccess 830, failed 0 00:33:33.688 23:16:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:33.688 23:16:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:33.688 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.631 [2024-06-09 23:16:02.490321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:176 nsid:1 lba:50584 len:8 PRP1 0x200007922000 PRP2 0x0 00:33:34.631 [2024-06-09 23:16:02.490369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:176 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:34.631 [2024-06-09 23:16:02.786668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:186 nsid:1 lba:81224 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:33:34.631 [2024-06-09 23:16:02.786689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:186 cdw0:0 sqhd:005f p:1 m:0 dnr:0 00:33:34.893 [2024-06-09 23:16:02.833111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:182 nsid:1 lba:86168 len:8 PRP1 0x2000078f6000 PRP2 0x0 00:33:34.893 [2024-06-09 23:16:02.833129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:182 cdw0:0 sqhd:00cd p:1 m:0 dnr:0 00:33:35.464 [2024-06-09 23:16:03.345374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:162 nsid:1 lba:138248 len:8 PRP1 0x2000078ec000 PRP2 0x0 00:33:35.464 [2024-06-09 23:16:03.345394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:162 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:33:37.376 Initializing NVMe Controllers 00:33:37.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:37.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:37.376 Initialization complete. Launching workers. 
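The spdk_target_abort case above is driven entirely over SPDK's JSON-RPC socket plus the bundled abort example. A rough sketch of the sequence the log records, with rpc.py standing in for the rpc_cmd wrapper and the BDF, NQN and queue depths taken from this run:

  # export the local NVMe SSD (0000:65:00.0) through an NVMe-oF/TCP subsystem
  scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420

  # then sweep the abort example over the queue depths exercised above
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
  done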
00:33:37.376 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 38754, failed: 4 00:33:37.376 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2680, failed to submit 36078 00:33:37.376 success 726, unsuccess 1954, failed 0 00:33:37.376 23:16:05 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:37.376 23:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.376 23:16:05 -- common/autotest_common.sh@10 -- # set +x 00:33:37.376 23:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:37.376 23:16:05 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:37.376 23:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:37.376 23:16:05 -- common/autotest_common.sh@10 -- # set +x 00:33:38.765 23:16:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:38.765 23:16:06 -- target/abort_qd_sizes.sh@62 -- # killprocess 146395 00:33:38.765 23:16:06 -- common/autotest_common.sh@926 -- # '[' -z 146395 ']' 00:33:38.765 23:16:06 -- common/autotest_common.sh@930 -- # kill -0 146395 00:33:38.765 23:16:06 -- common/autotest_common.sh@931 -- # uname 00:33:38.765 23:16:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:38.765 23:16:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146395 00:33:38.765 23:16:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:38.765 23:16:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:38.765 23:16:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146395' 00:33:38.765 killing process with pid 146395 00:33:38.765 23:16:06 -- common/autotest_common.sh@945 -- # kill 146395 00:33:38.765 23:16:06 -- common/autotest_common.sh@950 -- # wait 146395 00:33:39.026 00:33:39.026 real 0m12.160s 00:33:39.026 user 0m48.982s 00:33:39.026 sys 0m2.063s 00:33:39.026 23:16:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:39.026 23:16:07 -- common/autotest_common.sh@10 -- # set +x 00:33:39.026 ************************************ 00:33:39.026 END TEST spdk_target_abort 00:33:39.026 ************************************ 00:33:39.026 23:16:07 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:39.026 23:16:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:39.026 23:16:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:39.026 23:16:07 -- common/autotest_common.sh@10 -- # set +x 00:33:39.026 ************************************ 00:33:39.026 START TEST kernel_target_abort 00:33:39.026 ************************************ 00:33:39.026 23:16:07 -- common/autotest_common.sh@1104 -- # kernel_target 00:33:39.027 23:16:07 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:39.027 23:16:07 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:39.027 23:16:07 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:39.027 23:16:07 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:39.027 23:16:07 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:39.027 23:16:07 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:39.027 23:16:07 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:39.027 23:16:07 -- nvmf/common.sh@627 -- # local block nvme 00:33:39.027 23:16:07 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:39.027 23:16:07 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:39.027 23:16:07 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:39.027 23:16:07 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:42.333 Waiting for block devices as requested 00:33:42.333 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:42.333 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:42.333 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:42.594 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:42.594 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:42.594 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:42.855 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:42.855 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:42.855 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:43.116 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:43.116 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:43.116 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:43.378 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:43.378 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:43.378 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:43.378 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:43.640 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:43.902 23:16:11 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:33:43.902 23:16:11 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:43.902 23:16:11 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:33:43.902 23:16:11 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:33:43.902 23:16:11 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:43.902 No valid GPT data, bailing 00:33:43.902 23:16:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:43.902 23:16:11 -- scripts/common.sh@393 -- # pt= 00:33:43.902 23:16:11 -- scripts/common.sh@394 -- # return 1 00:33:43.902 23:16:11 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:33:43.902 23:16:11 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:33:43.902 23:16:11 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:43.902 23:16:11 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:43.902 23:16:11 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:43.902 23:16:11 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:33:43.902 23:16:11 -- nvmf/common.sh@654 -- # echo 1 00:33:43.902 23:16:11 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:33:43.902 23:16:11 -- nvmf/common.sh@656 -- # echo 1 00:33:43.902 23:16:11 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:33:43.902 23:16:11 -- nvmf/common.sh@663 -- # echo tcp 00:33:43.902 23:16:11 -- nvmf/common.sh@664 -- # echo 4420 00:33:43.902 23:16:11 -- nvmf/common.sh@665 -- # echo ipv4 00:33:43.902 23:16:11 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:43.903 23:16:11 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:33:43.903 00:33:43.903 Discovery Log Number of Records 2, Generation counter 2 00:33:43.903 =====Discovery Log Entry 0====== 00:33:43.903 trtype: tcp 00:33:43.903 adrfam: ipv4 00:33:43.903 
subtype: current discovery subsystem 00:33:43.903 treq: not specified, sq flow control disable supported 00:33:43.903 portid: 1 00:33:43.903 trsvcid: 4420 00:33:43.903 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:43.903 traddr: 10.0.0.1 00:33:43.903 eflags: none 00:33:43.903 sectype: none 00:33:43.903 =====Discovery Log Entry 1====== 00:33:43.903 trtype: tcp 00:33:43.903 adrfam: ipv4 00:33:43.903 subtype: nvme subsystem 00:33:43.903 treq: not specified, sq flow control disable supported 00:33:43.903 portid: 1 00:33:43.903 trsvcid: 4420 00:33:43.903 subnqn: kernel_target 00:33:43.903 traddr: 10.0.0.1 00:33:43.903 eflags: none 00:33:43.903 sectype: none 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:43.903 23:16:12 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:43.903 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.201 Initializing NVMe Controllers 00:33:47.201 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:47.201 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:47.201 Initialization complete. Launching workers. 
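The kernel_target_abort case now running builds its target with the in-kernel nvmet driver via configfs. xtrace prints the echo commands but not their redirect targets, so the attribute file names below are the standard nvmet configfs ones and should be read as an assumption rather than a transcript; the values themselves come from the log:

  modprobe nvmet
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target
  mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  mkdir /sys/kernel/config/nvmet/ports/1

  # subsystem identity and open host access (assumed attribute files)
  echo SPDK-kernel_target > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_serial
  echo 1 > /sys/kernel/config/nvmet/subsystems/kernel_target/attr_allow_any_host

  # back namespace 1 with the local disk that passed the GPT check, then enable it
  echo /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable

  # NVMe/TCP listener on the initiator-side address
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam

  # expose the subsystem on the port and confirm with discovery
  ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/
  nvme discover -t tcp -a 10.0.0.1 -s 4420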
00:33:47.201 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 33818, failed: 0 00:33:47.201 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33818, failed to submit 0 00:33:47.201 success 0, unsuccess 33818, failed 0 00:33:47.201 23:16:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:47.201 23:16:15 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:47.201 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.559 Initializing NVMe Controllers 00:33:50.559 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:50.559 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:50.559 Initialization complete. Launching workers. 00:33:50.559 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 71423, failed: 0 00:33:50.559 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17982, failed to submit 53441 00:33:50.559 success 0, unsuccess 17982, failed 0 00:33:50.559 23:16:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:50.559 23:16:18 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:50.559 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.866 Initializing NVMe Controllers 00:33:53.866 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:53.866 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:53.866 Initialization complete. Launching workers. 
00:33:53.866 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 69315, failed: 0 00:33:53.866 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17342, failed to submit 51973 00:33:53.866 success 0, unsuccess 17342, failed 0 00:33:53.866 23:16:21 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:33:53.866 23:16:21 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:33:53.866 23:16:21 -- nvmf/common.sh@677 -- # echo 0 00:33:53.866 23:16:21 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:33:53.866 23:16:21 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:53.866 23:16:21 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:53.866 23:16:21 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:53.866 23:16:21 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:33:53.866 23:16:21 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:33:53.866 00:33:53.866 real 0m14.274s 00:33:53.866 user 0m5.163s 00:33:53.866 sys 0m4.115s 00:33:53.866 23:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:53.866 23:16:21 -- common/autotest_common.sh@10 -- # set +x 00:33:53.866 ************************************ 00:33:53.866 END TEST kernel_target_abort 00:33:53.866 ************************************ 00:33:53.866 23:16:21 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:33:53.866 23:16:21 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:33:53.866 23:16:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:53.866 23:16:21 -- nvmf/common.sh@116 -- # sync 00:33:53.866 23:16:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:53.866 23:16:21 -- nvmf/common.sh@119 -- # set +e 00:33:53.866 23:16:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:53.866 23:16:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:53.866 rmmod nvme_tcp 00:33:53.866 rmmod nvme_fabrics 00:33:53.866 rmmod nvme_keyring 00:33:53.866 23:16:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:53.866 23:16:21 -- nvmf/common.sh@123 -- # set -e 00:33:53.866 23:16:21 -- nvmf/common.sh@124 -- # return 0 00:33:53.866 23:16:21 -- nvmf/common.sh@477 -- # '[' -n 146395 ']' 00:33:53.866 23:16:21 -- nvmf/common.sh@478 -- # killprocess 146395 00:33:53.866 23:16:21 -- common/autotest_common.sh@926 -- # '[' -z 146395 ']' 00:33:53.866 23:16:21 -- common/autotest_common.sh@930 -- # kill -0 146395 00:33:53.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (146395) - No such process 00:33:53.866 23:16:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 146395 is not found' 00:33:53.866 Process with pid 146395 is not found 00:33:53.866 23:16:21 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:53.866 23:16:21 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:57.174 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 
00:33:57.174 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:57.174 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:57.174 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:57.174 23:16:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:57.174 23:16:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:57.174 23:16:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:57.174 23:16:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:57.174 23:16:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.174 23:16:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:57.174 23:16:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.723 23:16:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:59.723 00:33:59.723 real 0m43.808s 00:33:59.723 user 0m58.944s 00:33:59.723 sys 0m16.146s 00:33:59.723 23:16:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.723 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:33:59.723 ************************************ 00:33:59.723 END TEST nvmf_abort_qd_sizes 00:33:59.723 ************************************ 00:33:59.723 23:16:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:59.723 23:16:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:59.723 23:16:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:59.723 23:16:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:59.723 23:16:27 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:33:59.723 23:16:27 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:33:59.723 23:16:27 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:33:59.723 23:16:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:59.723 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:33:59.723 23:16:27 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:33:59.723 23:16:27 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:33:59.723 23:16:27 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:33:59.723 23:16:27 -- common/autotest_common.sh@10 -- # set +x 00:34:07.867 INFO: APP EXITING 00:34:07.867 INFO: killing all VMs 00:34:07.867 INFO: killing vhost app 00:34:07.867 WARN: no vhost pid file found 00:34:07.867 INFO: EXIT DONE 00:34:09.781 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 
00:34:09.781 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:34:09.781 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:34:09.781 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:34:09.781 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:65:00.0 (144d a80a): Already using the nvme driver 00:34:10.041 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:34:10.041 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:34:10.302 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:34:14.509 Cleaning 00:34:14.509 Removing: /var/run/dpdk/spdk0/config 00:34:14.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:14.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:14.509 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:14.510 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:14.510 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:14.510 Removing: /var/run/dpdk/spdk1/config 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:14.510 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:14.510 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:14.510 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:14.510 Removing: /var/run/dpdk/spdk2/config 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:14.510 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:14.510 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:14.510 Removing: /var/run/dpdk/spdk3/config 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:14.510 
Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:14.510 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:14.510 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:14.510 Removing: /var/run/dpdk/spdk4/config 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:14.510 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:14.510 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:14.510 Removing: /dev/shm/bdev_svc_trace.1 00:34:14.510 Removing: /dev/shm/nvmf_trace.0 00:34:14.510 Removing: /dev/shm/spdk_tgt_trace.pid3878429 00:34:14.510 Removing: /var/run/dpdk/spdk0 00:34:14.510 Removing: /var/run/dpdk/spdk1 00:34:14.510 Removing: /var/run/dpdk/spdk2 00:34:14.510 Removing: /var/run/dpdk/spdk3 00:34:14.510 Removing: /var/run/dpdk/spdk4 00:34:14.510 Removing: /var/run/dpdk/spdk_pid102903 00:34:14.510 Removing: /var/run/dpdk/spdk_pid103116 00:34:14.510 Removing: /var/run/dpdk/spdk_pid110337 00:34:14.510 Removing: /var/run/dpdk/spdk_pid110545 00:34:14.510 Removing: /var/run/dpdk/spdk_pid113218 00:34:14.510 Removing: /var/run/dpdk/spdk_pid11331 00:34:14.510 Removing: /var/run/dpdk/spdk_pid120451 00:34:14.510 Removing: /var/run/dpdk/spdk_pid120556 00:34:14.510 Removing: /var/run/dpdk/spdk_pid126392 00:34:14.510 Removing: /var/run/dpdk/spdk_pid12849 00:34:14.510 Removing: /var/run/dpdk/spdk_pid128675 00:34:14.510 Removing: /var/run/dpdk/spdk_pid131040 00:34:14.510 Removing: /var/run/dpdk/spdk_pid132520 00:34:14.510 Removing: /var/run/dpdk/spdk_pid135295 00:34:14.510 Removing: /var/run/dpdk/spdk_pid136737 00:34:14.510 Removing: /var/run/dpdk/spdk_pid14382 00:34:14.510 Removing: /var/run/dpdk/spdk_pid146521 00:34:14.510 Removing: /var/run/dpdk/spdk_pid147110 00:34:14.510 Removing: /var/run/dpdk/spdk_pid147787 00:34:14.510 Removing: /var/run/dpdk/spdk_pid150764 00:34:14.510 Removing: /var/run/dpdk/spdk_pid151269 00:34:14.510 Removing: /var/run/dpdk/spdk_pid151812 00:34:14.510 Removing: /var/run/dpdk/spdk_pid19494 00:34:14.510 Removing: /var/run/dpdk/spdk_pid24548 00:34:14.510 Removing: /var/run/dpdk/spdk_pid33402 00:34:14.510 Removing: /var/run/dpdk/spdk_pid33510 00:34:14.510 Removing: /var/run/dpdk/spdk_pid38748 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3876799 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3878429 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3879002 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3880082 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3880622 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3880991 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3881380 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3881779 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3882095 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3882233 00:34:14.510 Removing: 
/var/run/dpdk/spdk_pid3882560 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3882954 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3884341 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3888200 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3888564 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3888861 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3888948 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3889339 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3889658 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3890033 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3890183 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3890441 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3890749 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3890867 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3891128 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3891562 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3891914 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3892212 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3892364 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3892544 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3892761 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3892984 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3893158 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3893466 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3893825 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3894145 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3894317 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3894527 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3894884 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3895221 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3895461 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3895609 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3895940 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3896276 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3896606 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3896748 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3897002 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3897336 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3897689 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3897892 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3898081 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3898400 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3898755 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3899067 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3899243 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3899462 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3899817 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3900153 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3900383 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3900547 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3900878 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3901212 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3901562 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3901709 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3901946 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3902281 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3902639 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3902887 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3903064 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3903348 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3903704 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3903768 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3904180 00:34:14.510 Removing: /var/run/dpdk/spdk_pid3908633 00:34:14.510 Removing: /var/run/dpdk/spdk_pid39294 00:34:14.510 Removing: /var/run/dpdk/spdk_pid39641 00:34:14.510 Removing: /var/run/dpdk/spdk_pid40019 00:34:14.510 Removing: 
/var/run/dpdk/spdk_pid4006596 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4011790 00:34:14.510 Removing: /var/run/dpdk/spdk_pid40123 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4023937 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4030421 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4035731 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4036411 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4046894 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4047251 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4052164 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4058 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4059092 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4062058 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4074113 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4085391 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4087425 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4088450 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4108548 00:34:14.510 Removing: /var/run/dpdk/spdk_pid4113122 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4118326 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4120349 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4122533 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4122737 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4123080 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4123234 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4123839 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4126190 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4127277 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4127693 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4135009 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4141399 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4147461 00:34:14.511 Removing: /var/run/dpdk/spdk_pid41488 00:34:14.511 Removing: /var/run/dpdk/spdk_pid4192447 00:34:14.511 Removing: /var/run/dpdk/spdk_pid43550 00:34:14.511 Removing: /var/run/dpdk/spdk_pid45536 00:34:14.772 Removing: /var/run/dpdk/spdk_pid47460 00:34:14.772 Removing: /var/run/dpdk/spdk_pid49490 00:34:14.772 Removing: /var/run/dpdk/spdk_pid51498 00:34:14.772 Removing: /var/run/dpdk/spdk_pid58918 00:34:14.772 Removing: /var/run/dpdk/spdk_pid59746 00:34:14.772 Removing: /var/run/dpdk/spdk_pid60832 00:34:14.772 Removing: /var/run/dpdk/spdk_pid62142 00:34:14.772 Removing: /var/run/dpdk/spdk_pid68233 00:34:14.772 Removing: /var/run/dpdk/spdk_pid71434 00:34:14.772 Removing: /var/run/dpdk/spdk_pid78067 00:34:14.772 Removing: /var/run/dpdk/spdk_pid85429 00:34:14.772 Removing: /var/run/dpdk/spdk_pid92463 00:34:14.772 Removing: /var/run/dpdk/spdk_pid93158 00:34:14.772 Removing: /var/run/dpdk/spdk_pid93849 00:34:14.772 Removing: /var/run/dpdk/spdk_pid94540 00:34:14.772 Removing: /var/run/dpdk/spdk_pid95609 00:34:14.772 Removing: /var/run/dpdk/spdk_pid96306 00:34:14.772 Removing: /var/run/dpdk/spdk_pid96992 00:34:14.772 Removing: /var/run/dpdk/spdk_pid97708 00:34:14.772 Clean 00:34:14.772 killing process with pid 3820777 00:34:24.851 killing process with pid 3820774 00:34:24.851 killing process with pid 3820776 00:34:24.851 killing process with pid 3820775 00:34:24.851 23:16:52 -- common/autotest_common.sh@1436 -- # return 0 00:34:24.851 23:16:52 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:34:24.851 23:16:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:24.851 23:16:52 -- common/autotest_common.sh@10 -- # set +x 00:34:24.851 23:16:52 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:34:24.851 23:16:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:34:24.851 23:16:52 -- common/autotest_common.sh@10 -- # set +x 
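Teardown mirrors setup. The clean_kernel_target and nvmftestfini output earlier in the log (before the autotest cleanup above) condenses to roughly the following; as before, the configfs path behind the bare "echo 0" and the internals of _remove_spdk_ns are not captured by xtrace, so those two lines are assumptions:

  # kernel target teardown (clean_kernel_target)
  echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
  rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
  modprobe -r nvmet_tcp nvmet

  # initiator/driver teardown (nvmftestfini / nvmf_tcp_fini)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns del cvl_0_0_ns_spdk      # what _remove_spdk_ns amounts to here (assumed)
  ip -4 addr flush cvl_0_1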
00:34:24.851 23:16:52 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:24.851 23:16:52 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:24.851 23:16:52 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:24.851 23:16:52 -- spdk/autotest.sh@394 -- # hash lcov
00:34:24.851 23:16:52 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:34:24.851 23:16:52 -- spdk/autotest.sh@396 -- # hostname
00:34:24.851 23:16:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:24.851 geninfo: WARNING: invalid characters removed from testname!
00:34:46.819 23:17:14 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:49.358 23:17:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:50.735 23:17:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:52.116 23:17:20 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:54.030 23:17:22 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:55.416 23:17:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
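The lcov invocations above follow the usual capture/merge/filter sequence: capture the test run with -c -d, merge it with the pre-test baseline using -a, then prune unwanted paths with -r. A condensed sketch of that sequence (option list and paths abbreviated relative to the full commands in the log):

  # Condensed sketch of the coverage post-processing recorded above.
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
  lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o cov_test.info          # capture this test run
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info     # merge with the baseline
  lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info          # drop bundled DPDK sources
  lcov $LCOV_OPTS -r cov_total.info '/usr/*' -o cov_total.info            # drop system headers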
00:34:57.330 23:17:25 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:57.330 23:17:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:57.330 23:17:25 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:57.330 23:17:25 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:57.330 23:17:25 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:57.330 23:17:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.330 23:17:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.330 23:17:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.330 23:17:25 -- paths/export.sh@5 -- $ export PATH
00:34:57.330 23:17:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:57.330 23:17:25 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:34:57.330 23:17:25 -- common/autobuild_common.sh@435 -- $ date +%s
00:34:57.330 23:17:25 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1717967845.XXXXXX
00:34:57.330 23:17:25 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1717967845.D8bQOt
00:34:57.330 23:17:25 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:34:57.330 23:17:25 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:34:57.330 23:17:25 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:34:57.330 23:17:25 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:57.330 23:17:25 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:57.330 23:17:25 -- common/autobuild_common.sh@451 -- $ get_config_params
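The autobuild_common.sh@434-435 lines above stake out the output directory and a throwaway packaging workspace keyed to the current epoch time; a minimal sketch of that pattern (the trap line is illustrative and not part of the logged script):

  # Sketch of the workspace setup recorded above.
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output   # artifact directory used by this job
  ts=$(date +%s)                                                    # 1717967845 in this run
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")                  # e.g. /tmp/spdk_1717967845.D8bQOt
  trap 'rm -rf "$SPDK_WORKSPACE"' EXIT                              # illustrative cleanup, not shown in the log
  echo "packaging workspace: $SPDK_WORKSPACE"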
00:34:57.330 23:17:25 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:34:57.330 23:17:25 -- common/autotest_common.sh@10 -- $ set +x
00:34:57.330 23:17:25 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:34:57.330 23:17:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144
00:34:57.330 23:17:25 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:57.330 23:17:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:57.330 23:17:25 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:34:57.330 23:17:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:57.330 23:17:25 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:57.330 23:17:25 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:57.330 23:17:25 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:57.330 23:17:25 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:57.330 23:17:25 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:57.330 + [[ -n 3778327 ]]
00:34:57.330 + sudo kill 3778327
00:34:57.341 [Pipeline] }
00:34:57.357 [Pipeline] // stage
00:34:57.361 [Pipeline] }
00:34:57.377 [Pipeline] // timeout
00:34:57.381 [Pipeline] }
00:34:57.396 [Pipeline] // catchError
00:34:57.400 [Pipeline] }
00:34:57.415 [Pipeline] // wrap
00:34:57.420 [Pipeline] }
00:34:57.433 [Pipeline] // catchError
00:34:57.441 [Pipeline] stage
00:34:57.443 [Pipeline] { (Epilogue)
00:34:57.456 [Pipeline] catchError
00:34:57.457 [Pipeline] {
00:34:57.470 [Pipeline] echo
00:34:57.472 Cleanup processes
00:34:57.478 [Pipeline] sh
00:34:57.831 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:57.831 168044 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:57.846 [Pipeline] sh
00:34:58.134 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:58.134 ++ grep -v 'sudo pgrep'
00:34:58.134 ++ awk '{print $1}'
00:34:58.134 + sudo kill -9
00:34:58.134 + true
00:34:58.145 [Pipeline] sh
00:34:58.429 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:10.686 [Pipeline] sh
00:35:10.974 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:10.974 Artifacts sizes are good
00:35:10.989 [Pipeline] archiveArtifacts
00:35:10.997 Archiving artifacts
00:35:11.246 [Pipeline] sh
00:35:11.534 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:11.550 [Pipeline] cleanWs
00:35:11.560 [WS-CLEANUP] Deleting project workspace...
00:35:11.560 [WS-CLEANUP] Deferred wipeout is used...
00:35:11.567 [WS-CLEANUP] done
00:35:11.569 [Pipeline] }
00:35:11.589 [Pipeline] // catchError
00:35:11.600 [Pipeline] sh
00:35:11.887 + logger -p user.info -t JENKINS-CI
00:35:11.897 [Pipeline] }
00:35:11.914 [Pipeline] // stage
00:35:11.919 [Pipeline] }
00:35:11.936 [Pipeline] // node
00:35:11.942 [Pipeline] End of Pipeline
00:35:11.989 Finished: SUCCESS